I'm syncing records from an external source into the database.
// Split into batches of size 10.
batches := service.TurnIntoBatches(cModels, 10)

// Sync to our DB.
var wg sync.WaitGroup
customerChannel := make(chan []*model.Customer, len(batches))
for _, batch := range batches {
    wg.Add(1)
    go UpdateCustomers(batch, &wg, uc.customerRp, customerChannel)
}
wg.Wait()
close(customerChannel)
func UpdateCustomers(customers []*model.Customer,
    wg *sync.WaitGroup,
    rp repository.CustomerRepositoryInterface,
    channel chan []*model.Customer) {
    defer wg.Done()
    for _, c := range customers {
        // err := update logic for c, using rp
        if err != nil {
            logger.WithError(err).Error("Updating failed!")
        }
    }
}
If I have 50 records, that's 5 batches of 10.
There are 50 insert queries, as expected, and every one of them returns [1 rows affected or returned ].
However, if I then query the database, there are sometimes only 45 or 46 records.
What is the problem, and how do I fix it?
I have the following query, which returns results when run directly against MySQL.
The same query returns zero values when run from a Go program.
package main

import (
    _ "github.com/go-sql-driver/mysql"
    "github.com/jmoiron/sqlx"
    "github.com/rs/zerolog/log"
)

var DB *sqlx.DB

func main() {
    var err error
    DB, err = sqlx.Connect("mysql", "root:password@tcp(localhost:3306)/jsl2")
    if err != nil {
        log.Error().Err(err).Msg("connect failed")
    }
    sqlstring := `SELECT
        salesdetails.taxper, sum(salesdetails.gvalue),
        sum(salesdetails.taxamt)
    FROM salesdetails
    INNER JOIN sales ON sales.saleskey = salesdetails.saleskey
    WHERE
        sales.bdate >= '2021-12-01'
        AND sales.bdate <= '2021-12-31'
        AND sales.achead IN (401975)
    GROUP BY salesdetails.taxper
    ORDER BY salesdetails.taxper`
    rows, err := DB.Query(sqlstring)
    for rows.Next() {
        var taxper int
        var taxableValue float64
        var taxAmount float64
        err = rows.Scan(&taxper, &taxableValue, &taxAmount)
        log.Print(taxper, taxableValue, taxAmount)
    }
    err = rows.Err()
    if err != nil {
        log.Error().Err(err).Msg("rows error")
    }
}
In a SQL browser, the query correctly returns 4 rows:

0 1278.00 0.00
5 89875.65 4493.78
12 3680.00 441.60
18 94868.73 17076.37

Running the Go program also returns 4 rows on the console, but every value is zero:
{"level":"debug","time":"2022-01-13T17:07:39+05:30","message":"0 0 0"}
{"level":"debug","time":"2022-01-13T17:07:39+05:30","message":"0 0 0"}
{"level":"debug","time":"2022-01-13T17:07:39+05:30","message":"0 0 0"}
{"level":"debug","time":"2022-01-13T17:07:39+05:30","message":"0 0 0"}
How do I set the datatype for the aggregate-function columns?
I changed the datatype of taxper to float and it worked.
I found this after checking the error returned by rows.Scan, as suggested by @mkopriva.
I have the following code:
const qInstances = `
SELECT
i.uuid,
i.host,
i.hostname
FROM
db.instances AS i
WHERE
i.deleted_at IS NULL
GROUP BY i.uuid;
`
...
instancesMap := make(map[string]*models.InstanceModel)
instances := []models.Instance{}
instancesCount := 0
instancesRow, err := db.Query(qInstances)
if err != nil {
    panic(err.Error())
}
defer instancesRow.Close()
for instancesRow.Next() {
    i := models.Instance{}
    err = instancesRow.Scan(&i.UUID, &i.Host, &i.Hostname)
    if err != nil {
        log.Printf("[Error] - While scanning instances rows, error msg: %s\n", err)
        panic(err.Error())
    }
    if i.UUID.String != "" {
        instancesCount++
    }
    if _, ok := instancesMap[i.UUID.String]; !ok {
        instancesMap[i.UUID.String] = &models.InstanceModel{}
    }
    inst := instancesMap[i.UUID.String]
    inst.UUID = i.UUID.String
    inst.Host = i.Host.String
    inst.Hostname = i.Hostname.String
    instances = append(instances, i)
}
log.Printf("[Instances] - Total Count: %d\n", instancesCount)
The problem I'm facing: if I run the SQL query directly against the database (MariaDB), it returns 7150 records, but the total count inside the program comes out to 5196 records. I checked the SetConnMaxLifetime parameter for the DB connection and set it to 240 seconds, and there are no errors or signs of broken connectivity between the DB and the program. I also tried paginating (LIMIT of 5000 records each) with two separate queries: the first returns 5000 records, but the second returns only 196. I'm using the "github.com/go-sql-driver/mysql" package. Any ideas?
I found a strange inconsistency when selecting rows from MySQL 8.0.19 after the result set had been edited elsewhere (e.g. by editing and committing rows in MySQL Workbench).
(For reference, this function: https://golang.org/pkg/database/sql/#DB.Query)
In other words, db.Query(SQL) returns the old state of my result set (from before the edit was committed).
MySQL rows before editing:
105 admin
106 user1
107 user2
109 user3
MySQL rows after editing:
105 admin
106 user11
107 user22
109 user33
But Go's db.Query(SQL) still returns:
105 admin
106 user1
107 user2
109 user3
Does db.Query(SQL) need to be committed to stay consistent with the current database state? After I added db.Begin() and db.Commit(), it started behaving consistently. I haven't tried other databases; it doesn't look like a driver issue or a variable-copy issue. It is a bit odd coming from JDBC. Autocommit is disabled.
The code:
func FindAll(d *sql.DB) ([]*usermodel.User, error) {
    const SQL = `SELECT * FROM users t ORDER BY 1`
    // tx, _ := d.Begin()
    rows, err := d.Query(SQL)
    if err != nil {
        return nil, err
    }
    defer func() {
        err := rows.Close()
        if err != nil {
            log.Fatalln(err)
        }
    }()
    l := make([]*usermodel.User, 0)
    for rows.Next() {
        t := usermodel.NewUser()
        if err = rows.Scan(&t.UserId, &t.Username, &t.FullName, &t.PasswordHash, &t.Email, &t.ExpireDate,
            &t.LastAuthDate, &t.StateId, &t.CreatedAt, &t.UpdatedAt); err != nil {
            return nil, err
        }
        l = append(l, t)
    }
    if err = rows.Err(); err != nil {
        return nil, err
    }
    // _ = tx.Commit()
    return l, nil
}
This is purely about MySQL MVCC (see https://dev.mysql.com/doc/refman/8.0/en/innodb-multi-versioning.html and https://dev.mysql.com/doc/refman/8.0/en/innodb-transaction-isolation-levels.html), not about Go or the DB driver.
In short: if you start a transaction and read some data, and another transaction then changes that data and commits, you may or may not see the change, depending on the transaction isolation level set on the MySQL server. Under the default REPEATABLE READ level, a transaction keeps reading from the snapshot established by its first read until it ends, which is why adding db.Begin()/db.Commit() (ending the transaction each time) made the reads consistent.
The following is my code to get multiple rows from the DB, and it works:
defer db.Close()
for rows.Next() {
    err = rows.Scan(&a)
    if err != nil {
        log.Print(err)
    }
}
How can I check whether the result set contains no rows?
I even tried the following:

if err == sql.ErrNoRows {
    fmt.Print("No rows")
}

and also checked while scanning:

err = rows.Scan(&a)
if err == sql.ErrNoRows {
    fmt.Print("No rows")
}

I don't understand which call actually returns ErrNoRows: *Rows, err, or Scan?
QueryRow returns a *Row (not a *Rows), and you cannot iterate through the results because it only expects a single row back. This means that rows.Scan in your example code will not compile.
If you expect your SQL query to return a single result (e.g. you are running a count() or selecting by ID), then use QueryRow; for example (modified from here):
id := 43
var username string
err = db.QueryRow("SELECT username FROM users WHERE id = ?", id).Scan(&username)
switch {
case err == sql.ErrNoRows:
    log.Fatalf("no user with id %d", id)
case err != nil:
    log.Fatal(err)
default:
    log.Printf("username is %s\n", username)
}
If you are expecting multiple rows, then use Query(); for example (modified from here):
age := 27
rows, err := db.Query("SELECT name FROM users WHERE age = ?", age)
if err != nil {
    log.Fatal(err)
}
defer rows.Close()
names := make([]string, 0)
for rows.Next() {
    var name string
    if err := rows.Scan(&name); err != nil {
        log.Fatal(err)
    }
    names = append(names, name)
}
// Check for errors from iterating over rows.
if err := rows.Err(); err != nil {
    log.Fatal(err)
}
// Check for no results.
if len(names) == 0 {
    log.Fatal("No Results")
}
log.Printf("%s are %d years old", strings.Join(names, ", "), age)
The above shows one way of checking if there are no results. If you are not putting the results into a slice/map then you can keep a counter or set a boolean within the loop. Note that no error will be returned if there are no results (because this is a perfectly valid outcome) and the SQL package provides no way to check the number of results other than iterate through them (if all you are interested in is the number of results then run select count(*)...).
I'm pretty new to Go and looking for a way to handle 30,000 queries using 100 workers, ensuring a connection for every worker (MySQL is already configured to allow more than 100 connections). This is my attempt:
package main

import (
    "database/sql"

    _ "github.com/go-sql-driver/mysql"
)

var query *sql.Stmt

func worker(jobs <-chan int, results chan<- int) {
    for range jobs {
        _, e := query.Exec("a")
        if e != nil {
            panic(e.Error())
        }
        results <- 1
    }
}

func main() {
    workers := 100
    db, e := sql.Open("mysql", "foo:foo@/foo")
    if e != nil {
        panic(e.Error())
    }
    db.SetMaxOpenConns(workers)
    db.SetMaxIdleConns(workers)
    defer db.Close()
    query, e = db.Prepare("INSERT INTO foo (foo) VALUES (?)")
    if e != nil {
        panic(e.Error())
    }
    total := 30000
    jobs := make(chan int, total)
    results := make(chan int, total)
    for w := 0; w < workers; w++ {
        go worker(jobs, results)
    }
    for j := 0; j < total; j++ {
        jobs <- j
    }
    close(jobs)
    for r := 0; r < total; r++ {
        <-results
    }
}
It works, but I'm not sure whether it's the best way of doing this.
If you think this is opinion-based or not a good question at all, just vote to close it and leave a comment explaining why.
What you've got fundamentally works, but to get rid of the buffering you need to be writing to jobs and reading from results at the same time. Otherwise your process ends up stuck: workers can't send results because nothing is receiving them, and you can't insert jobs because the workers are blocked.
Here's a boiled-down example on the Playground of how to do a work queue that pushes jobs in the background as it receives results in main:
package main

import "fmt"

func worker(jobs <-chan int, results chan<- int) {
    for range jobs {
        // ...do work here...
        results <- 1
    }
}

func main() {
    workers := 10
    total := 30
    jobs := make(chan int)
    results := make(chan int)
    // start workers
    for w := 0; w < workers; w++ {
        go worker(jobs, results)
    }
    // insert jobs in background
    go func() {
        for j := 0; j < total; j++ {
            jobs <- j
        }
    }()
    // collect results
    for i := 0; i < total; i++ {
        <-results
        fmt.Printf(".")
    }
    close(jobs)
}
For that particular code to work, you have to know how many results you'll get. If you don't know that (say, each job could produce zero or multiple results), you can use a sync.WaitGroup to wait for the workers to finish, then close the result stream:
package main

import (
    "fmt"
    "sync"
)

func worker(jobs <-chan int, results chan<- int, wg *sync.WaitGroup) {
    defer wg.Done()
    for range jobs {
        // ...do work here...
        results <- 1
    }
}

func main() {
    workers := 10
    total := 30
    jobs := make(chan int)
    results := make(chan int)
    wg := &sync.WaitGroup{}
    // start workers
    for w := 0; w < workers; w++ {
        wg.Add(1)
        go worker(jobs, results, wg)
    }
    // insert jobs in background
    go func() {
        for j := 0; j < total; j++ {
            jobs <- j
        }
        close(jobs)
        wg.Wait()
        // all workers are done, so no more results
        close(results)
    }()
    // collect results
    for range results {
        fmt.Printf(".")
    }
}
There are many other more complicated tricks one can do to stop all workers after an error happens, put results into the same order as the original jobs, or do other things like that. Sounds as if the basic version works here, though.