Summarizing contents of a CSV

Context
I'm working on creating a little program that can summarize the contents of an absolute mess of a bill, which is in CSV form.
The bill has three columns I'm interested in:
Event type. Here, I'm only interested in the rows where this column reads CHARGE.
The cost. Self-explanatory.
Resource name, containing server and cluster names. The format is servername.clustername.
The idea is to select the rows that are labeled as charge, split them up first by cluster and then by server name, and sum up the total costs for each.
I can't help but feel like this should be easy, but I've been scratching my head on this for a while now and just can't seem to figure it out. At this point I ought to state that I am fairly new to programming and entirely new to Go.
Here's what I have so far:
package main
import (
"encoding/csv"
"log"
"os"
"sort"
"strconv"
"strings"
)
func main() {
rows := readBill("bill-2018-April.csv")
rows = calculateSummary(rows)
writeSummary("bill-2018-April-output", rows)
}
func readBill(name string) [][]string {
f, err := os.Open(name)
if err != nil {
log.Fatalf("Cannot open '%s': %s\n", name, err.Error())
}
defer f.Close()
r := csv.NewReader(f)
rows, err := r.ReadAll()
if err != nil {
log.Fatalln("Cannot read CSV data:", err.Error())
}
return rows
}
type charges struct {
impactType string
cost float64
resName string
}
func createCharges(rows [][]string) []charges {
cs := []charges{}
for _, r := range rows {
var c charges
c.impactType = r[10]
// the cost column is text in the CSV, so parse it into a float64
cost, err := strconv.ParseFloat(r[15], 64)
if err != nil {
log.Fatalln("Cannot parse cost:", err.Error())
}
c.cost = cost
c.resName = r[20]
cs = append(cs, c)
}
return cs
}
So, as far as I can tell, I should now have isolated the columns I am interested in (i.e. columns 10, 15 and 20). Is what I have so far even correct?
How would I go about singling out the rows reading "CHARGE" and slicing everything up by cluster and server?
Summing things up shouldn't be too tricky, but for whatever reason, this is really stumping me.

Just use two maps to store the sums per server and per cluster. And since you're not interested in the whole CSV but only some rows, reading everything is kind of wasteful. Just skip the rows you don't care about:
package main
import (
"encoding/csv"
"fmt"
"io"
"log"
"strconv"
"strings"
)
func main() {
b := `
,,,,,,,,,,CHARGE,,,,,100.00,,,,,s1.c1
,,,,,,,,,,IGNORE,,,,,,,,,,
,,,,,,,,,,CHARGE,,,,,200.00,,,,,s2.c1
,,,,,,,,,,CHARGE,,,,,300.00,,,,,s3.c2
`
r := csv.NewReader(strings.NewReader(b))
byServer := make(map[string]float64)
byCluster := make(map[string]float64)
for i := 0; ; i++ {
row, err := r.Read()
if err == io.EOF {
break
}
if err != nil {
log.Fatal(err)
}
if row[10] != "CHARGE" {
continue
}
cost, err := strconv.ParseFloat(row[15], 64)
if err != nil {
log.Fatalf("row %d: malformed cost: %v", i, err)
}
xs := strings.SplitN(row[20], ".", 2)
if len(xs) != 2 {
log.Fatalf("row %d: malformed resource name", i)
}
server, cluster := xs[0], xs[1]
byServer[server] += cost
byCluster[cluster] += cost
}
fmt.Printf("byServer: %+v\n", byServer)
fmt.Printf("byCluster: %+v\n", byCluster)
}
// Output:
// byServer: map[s2:200 s3:300 s1:100]
// byCluster: map[c1:300 c2:300]
Try it on the playground: https://play.golang.org/p/1e9mJf4LyYE
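For the writeSummary step in your draft, note that Go randomizes map iteration order, so sort the keys if you want deterministic output. Here is a minimal sketch; the output file name and the two-column name,total layout are just assumptions:

package main

import (
	"encoding/csv"
	"log"
	"os"
	"sort"
	"strconv"
)

// writeSummary writes one "name,total" row per key, sorted by key
// so the output is deterministic across runs.
func writeSummary(name string, sums map[string]float64) error {
	f, err := os.Create(name)
	if err != nil {
		return err
	}
	defer f.Close()
	keys := make([]string, 0, len(sums))
	for k := range sums {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	w := csv.NewWriter(f)
	for _, k := range keys {
		rec := []string{k, strconv.FormatFloat(sums[k], 'f', 2, 64)}
		if err := w.Write(rec); err != nil {
			return err
		}
	}
	w.Flush() // csv.Writer buffers; flush before checking for errors
	return w.Error()
}

func main() {
	byCluster := map[string]float64{"c1": 300, "c2": 300}
	if err := writeSummary("bill-2018-April-output.csv", byCluster); err != nil {
		log.Fatal(err)
	}
}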

Related

How to filter elements of a [][]string slice in Golang?

First of all, I'm new here and I'm trying to learn Go. I would like to check my CSV file (which has three values: type, maker, model), create a new one, and after a filter operation write the new (filtered) data to the created CSV file. Here is my code so you can understand me more clearly.
package main
import (
"encoding/csv"
"fmt"
"os"
)
func main() {
//opening my csv file which is vehicles.csv
recordFile, err := os.Open("vehicles.csv")
if err != nil{
fmt.Println("An error encountered ::", err)
}
//reading it
reader := csv.NewReader(recordFile)
vehicles, _ := reader.ReadAll()
//creating a new csv file
newRecordFile, err := os.Create("newCsvFile.csv")
if err != nil{
fmt.Println("An error encountered ::", err)
}
//writing vehicles.csv into the new csv
writer := csv.NewWriter(newRecordFile)
err = writer.WriteAll(vehicles)
if err != nil {
fmt.Println("An error encountered ::", err)
}
}
After I build it, it works this way: it reads all the data and writes it to the newly created CSV file. But the problem is that I want to filter duplicates out of the CSV I read (vehicles). I am creating another function (outside of the main function) to filter duplicates, but I can't do it because the type of vehicles is [][]string; I searched the internet for filtering duplicates, but everything I found covered int or string types. What I want to do is create a function and call it before the WriteAll operation, so WriteAll can write the correct (duplicate-filtered) data into the new CSV file. Help me please!!
I appreciate any answer.
Happy coding!
This depends on how you define "uniqueness", but in general there are a few parts of this problem.
What is unique?
1. All fields must be equal.
2. Only some fields must be equal.
3. Normalize some or all fields before comparing.
You have a few approaches for applying your uniqueness, including:
1. Use a map, keyed by the "pieces" of uniqueness; requires O(N) state.
2. Sort the records and compare with the prior record as you iterate; requires O(1) state but is more complicated.
You have two approaches for filtering and outputting:
1. Build a new slice based on the old one using a loop and write it all at once; requires O(N) space.
2. Write the records out to the file as you go if you don't need to sort; requires O(1) space.
I think a reasonably simple and performant approach would be to pick (1) from the first list, (1) from the second, and (2) from the third, which together would look like:
package main
import (
"encoding/csv"
"errors"
"io"
"log"
"os"
)
func main() {
input, err := os.Open("vehicles.csv")
if err != nil {
log.Fatalf("opening input file: %s", err)
}
output, err := os.Create("vehicles_filtered.csv")
if err != nil {
log.Fatalf("creating output file: %s", err)
}
defer func() {
// Ensure the file is closed at the end of the program
if err := output.Close(); err != nil {
log.Fatalf("finalizing output file: %s", err)
}
}()
reader := csv.NewReader(input)
writer := csv.NewWriter(output)
seen := make(map[[3]string]bool)
for {
// Read in one record
record, err := reader.Read()
if errors.Is(err, io.EOF) {
break
}
if err != nil {
log.Fatalf("reading record: %s", err)
}
if len(record) != 3 {
log.Printf("bad record %q", record)
continue
}
// Check if the record has been seen before, skipping if so
key := [3]string{record[0], record[1], record[2]}
if seen[key] {
continue
}
seen[key] = true
// Write the record
if err := writer.Write(record); err != nil {
log.Fatalf("writing record %d: %s", len(seen), err)
}
}
// csv.Writer buffers its output; flush and check for an error before exiting
writer.Flush()
if err := writer.Error(); err != nil {
log.Fatalf("flushing output: %s", err)
}
}
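If your definition of uniqueness calls for normalization (option 3 in the first list), normalize the fields when building the key but write the original record unchanged. A small sketch with a hypothetical normalize helper; adjust it to whatever "equal" means for your data:

package main

import (
	"fmt"
	"strings"
)

// normalize builds a comparison key that ignores case and
// surrounding whitespace.
func normalize(field string) string {
	return strings.ToLower(strings.TrimSpace(field))
}

func main() {
	records := [][]string{
		{"car", "Ford", "Focus"},
		{"Car", " ford ", "focus"}, // a duplicate once normalized
	}
	seen := make(map[[3]string]bool)
	for _, record := range records {
		key := [3]string{normalize(record[0]), normalize(record[1]), normalize(record[2])}
		if seen[key] {
			continue
		}
		seen[key] = true
		fmt.Println(record) // in the real program: writer.Write(record)
	}
}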

Golang Dynamically Unmarshalling JSON

Is it possible in Go, given the function below, to unmarshal jsonString without knowing the type of c at runtime?
func findChargedItems(fs financialService, conditions []string) ([]*models.ChargedItem, error) {
var jsonResult []string
f := getChargedItemsQuery(conditions)
q, _, _ := f.ToSql()
err := fs.db.Select(&jsonResult, q)
if err != nil {
return nil, err
}
jsonString := fmt.Sprintf("[%v]", strings.Join(jsonResult, ","))
c := make([]*models.ChargedItem, 0)
err = json.Unmarshal([]byte(jsonString), &c)
if err != nil {
return nil, err
}
return c, nil
}
The problem is that I have tons of models that need to follow this exact process, and I'm repeating myself to achieve it. I would like a "generic" function findEntities that operates agnostic of ChargedItem and getChargedItemsQuery. I realize I can just pass a function in for getChargedItemsQuery, so that takes care of that problem, but I am having issues with json.Unmarshal: for instance, when I try to use an interface, the JSON fields do not map correctly. Is there a way to achieve what I'm trying to do without affecting the data models?
I'm not sure what you're trying to do, but it's probably not a good idea. At any rate, this should do what you want:
package main
import (
"encoding/json"
"fmt"
)
func main() {
// do what you're trying to do
var (
a = []byte("[10, 11]")
b []interface{}
)
json.Unmarshal(a, &b)
// then fix it later
c := make([]float64, len(b))
for n := range c {
c[n] = b[n].(float64)
}
fmt.Println(c)
}
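This Q&A predates Go generics, but if you are on Go 1.18 or later, type parameters are one way to factor out the repetition the question describes. A sketch under that assumption; decodeRows and the chargedItem model are hypothetical names, and the rows are joined into a JSON array exactly as the question does:

package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// decodeRows joins per-row JSON objects into one JSON array and
// decodes it into a slice of whatever model type the caller names.
func decodeRows[T any](rows []string) ([]T, error) {
	var out []T
	payload := "[" + strings.Join(rows, ",") + "]"
	if err := json.Unmarshal([]byte(payload), &out); err != nil {
		return nil, err
	}
	return out, nil
}

// chargedItem stands in for models.ChargedItem from the question.
type chargedItem struct {
	ID   int     `json:"id"`
	Cost float64 `json:"cost"`
}

func main() {
	rows := []string{`{"id":1,"cost":9.99}`, `{"id":2,"cost":5.25}`}
	items, err := decodeRows[chargedItem](rows)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", items) // [{ID:1 Cost:9.99} {ID:2 Cost:5.25}]
}

Each model then needs only its own query builder and a one-line decodeRows call instead of a full copy of findChargedItems.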

Combining map values into one json?

I am in the process of learning Go, and I'm trying to make a program that takes websites stored in a column of a CSV, then queries http://ip-api.com to find out what country the IP address originates from.
However the issue I am running into is my JSON is showing up like this:
[{"country":"Singapore"}]
[{"country":"United States"},{"country":"United States"}]
[{"country":"Singapore"},{"country":"Singapore"},{"country":"Singapore"}]
[{"country":"Ireland"},{"country":"Ireland"},{"country":"Ireland"},{"country":"Ireland"}]
But I want it to show up like this
{"country": "Singapore",
"country": "United States"
"country": "Ireland"
}
My CSV File looks like this
www.google.com
www.bing.com
www.pokemon.com
www.yahoo.com
And here is my code
package main
import (
"encoding/csv"
"encoding/json"
"fmt"
"io"
"io/ioutil"
"log"
"net/http"
"os"
)
func closeFile(f *os.File) {
err := f.Close()
if err != nil {
fmt.Fprintf(os.Stderr, "error: %v\n", err)
os.Exit(1)
}
}
func main() {
m := make(map[string]string)
result := []map[string]string{}
csvFile, err := os.Open("test.csv")
if err != nil {
log.Fatal(err)
}
defer closeFile(csvFile)
reader := csv.NewReader(csvFile)
for {
line, err := reader.Read()
if err == io.EOF {
break
} else if err != nil {
log.Fatal(err)
}
response, err := http.Get(fmt.Sprintf("http://ip-api.com/json/%s?fields=org", line[0]))
if err != nil {
fmt.Println(err)
defer response.Body.Close()
} else {
data, _ := ioutil.ReadAll(response.Body)
err := json.Unmarshal(data, &m)
if err != nil {
panic(err)
}
result = append(result, m)
rest, _ := json.Marshal(result)
fmt.Println(string(rest))
}
}
}
I feel like the issue is that I'm missing a for ... range loop to collect everything before printing, but I would love any feedback to sort this issue out.
This happens because Go maps are reference-like; a map variable does not copy the underlying data:
Map types are reference types, like pointers or slices.
Since the map is created with make once, before the loop starts, appending it to the result slice on every iteration stores what is effectively the same pointer to the same underlying data multiple times.
Unmarshal then doesn't create a new map either; it reuses the existing one, overwriting the previously retrieved result each time.
So, the fix
Recreate the map on every iteration. Just move the m := make(map[string]string) line inside the loop, right before the API call.
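A self-contained demonstration of the difference, with hard-coded JSON standing in for the API responses:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	inputs := []string{`{"country":"Singapore"}`, `{"country":"Ireland"}`}

	// One map reused across iterations: every slice element aliases it,
	// so the last Unmarshal overwrites all earlier results.
	shared := make(map[string]string)
	var bad []map[string]string
	for _, in := range inputs {
		json.Unmarshal([]byte(in), &shared) // error handling elided
		bad = append(bad, shared)
	}
	fmt.Println(bad) // [map[country:Ireland] map[country:Ireland]]

	// A fresh map per iteration: each element has its own storage.
	var good []map[string]string
	for _, in := range inputs {
		m := make(map[string]string)
		json.Unmarshal([]byte(in), &m) // error handling elided
		good = append(good, m)
	}
	fmt.Println(good) // [map[country:Singapore] map[country:Ireland]]
}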

Undefined behaviour while loading a large CSV concurrently using Goroutines

I am trying to load a big CSV file in Go using goroutines. The dimensions of the CSV are (254882, 100). But when my goroutines parse the CSV and store it into a 2D list, I get fewer than 254882 rows, and the number varies on each run. I feel it is happening due to the goroutines, but can't seem to pinpoint the reason. Can anyone please help me? I am also new to Go. Here is my code:
func loadCSV(csvFile string) (*[][]float64, error) {
startTime := time.Now()
var dataset [][]float64
f, err := os.Open(csvFile)
if err != nil {
return &dataset, err
}
r := csv.NewReader(bufio.NewReader(f))
counter := 0
var wg sync.WaitGroup
for {
record, err := r.Read()
if err == io.EOF {
break
}
if counter != 0 {
wg.Add(1)
go func(r []string, dataset *[][]float64) {
var temp []float64
for _, each := range record {
f, err := strconv.ParseFloat(each, 64)
if err == nil {
temp = append(temp, f)
}
}
*dataset = append(*dataset, temp)
wg.Done()
}(record, &dataset)
}
counter++
}
wg.Wait()
duration := time.Now().Sub(startTime)
log.Printf("Loaded %d rows in %v seconds", counter, duration)
return &dataset, nil
}
And my main function looks like the following
func main() {
// runtime.GOMAXPROCS(4)
dataset, err := loadCSV("AvgW2V_train.csv")
if err != nil {
panic(err)
}
fmt.Println(len(*dataset))
}
If anyone needs to download the CSV too, then click the link below (485 MB)
https://drive.google.com/file/d/1G4Nw6JyeC-i0R1exWp5BtRtGM1Fwyelm/view?usp=sharing
Go Data Race Detector
Your results are undefined because you have data races.
~/gopath/src$ go run -race racer.go
==================
WARNING: DATA RACE
Write at 0x00c00008a060 by goroutine 6:
runtime.mapassign_faststr()
/home/peter/go/src/runtime/map_faststr.go:202 +0x0
main.main.func2()
/home/peter/gopath/src/racer.go:16 +0x6a
Previous write at 0x00c00008a060 by goroutine 5:
runtime.mapassign_faststr()
/home/peter/go/src/runtime/map_faststr.go:202 +0x0
main.main.func1()
/home/peter/gopath/src/racer.go:11 +0x6a
Goroutine 6 (running) created at:
main.main()
/home/peter/gopath/src/racer.go:14 +0x88
Goroutine 5 (running) created at:
main.main()
/home/peter/gopath/src/racer.go:9 +0x5b
==================
fatal error: concurrent map writes
==================
WARNING: DATA RACE
Write at 0x00c00009a088 by goroutine 6:
main.main.func2()
/home/peter/gopath/src/racer.go:16 +0x7f
Previous write at 0x00c00009a088 by goroutine 5:
main.main.func1()
/home/peter/gopath/src/racer.go:11 +0x7f
Goroutine 6 (running) created at:
main.main()
/home/peter/gopath/src/racer.go:14 +0x88
Goroutine 5 (running) created at:
main.main()
/home/peter/gopath/src/racer.go:9 +0x5b
==================
goroutine 34 [running]:
runtime.throw(0x49e156, 0x15)
/home/peter/go/src/runtime/panic.go:608 +0x72 fp=0xc000094718 sp=0xc0000946e8 pc=0x44b342
runtime.mapassign_faststr(0x48ace0, 0xc00008a060, 0x49c9c3, 0x8, 0xc00009a088)
/home/peter/go/src/runtime/map_faststr.go:211 +0x46c fp=0xc000094790 sp=0xc000094718 pc=0x43598c
main.main.func1(0x49c9c3, 0x8)
/home/peter/gopath/src/racer.go:11 +0x6b fp=0xc0000947d0 sp=0xc000094790 pc=0x47ac6b
runtime.goexit()
/home/peter/go/src/runtime/asm_amd64.s:1340 +0x1 fp=0xc0000947d8 sp=0xc0000947d0 pc=0x473061
created by main.main
/home/peter/gopath/src/racer.go:9 +0x5c
goroutine 1 [sleep]:
time.Sleep(0x5f5e100)
/home/peter/go/src/runtime/time.go:105 +0x14a
main.main()
/home/peter/gopath/src/racer.go:19 +0x96
goroutine 35 [runnable]:
main.main.func2(0x49c9c3, 0x8)
/home/peter/gopath/src/racer.go:16 +0x6b
created by main.main
/home/peter/gopath/src/racer.go:14 +0x89
exit status 2
~/gopath/src$
racer.go:
package main
import (
"bufio"
"encoding/csv"
"fmt"
"io"
"log"
"os"
"strconv"
"sync"
"time"
)
func loadCSV(csvFile string) (*[][]float64, error) {
startTime := time.Now()
var dataset [][]float64
f, err := os.Open(csvFile)
if err != nil {
return &dataset, err
}
r := csv.NewReader(bufio.NewReader(f))
counter := 0
var wg sync.WaitGroup
for {
record, err := r.Read()
if err == io.EOF {
break
}
if counter != 0 {
wg.Add(1)
go func(r []string, dataset *[][]float64) {
var temp []float64
for _, each := range record {
f, err := strconv.ParseFloat(each, 64)
if err == nil {
temp = append(temp, f)
}
}
*dataset = append(*dataset, temp)
wg.Done()
}(record, &dataset)
}
counter++
}
wg.Wait()
duration := time.Now().Sub(startTime)
log.Printf("Loaded %d rows in %v seconds", counter, duration)
return &dataset, nil
}
func main() {
// runtime.GOMAXPROCS(4)
dataset, err := loadCSV("/home/peter/AvgW2V_train.csv")
if err != nil {
panic(err)
}
fmt.Println(len(*dataset))
}
There is no need to return *[][]float64; a slice header already refers to its backing array, so returning [][]float64 is enough.
I have made some minor modifications to your program.
dataset is available to the new goroutine, since it's declared in an enclosing block of code.
Similarly, record is also available, but since the record variable changes from iteration to iteration, we need to pass it to the new goroutine as an argument.
There is no need to pass dataset, as it is not reassigned, and appending temp to it is exactly what we want.
But a race condition happens when multiple goroutines try to append to the same variable, i.e., multiple goroutines write to the same variable at once.
So we need to make sure that only one goroutine can append at any instant of time, and we use a lock to make the appends sequential.
package main
import (
"bufio"
"encoding/csv"
"fmt"
"os"
"strconv"
"sync"
)
func loadCSV(csvFile string) [][]float64 {
var dataset [][]float64
f, _ := os.Open(csvFile)
r := csv.NewReader(f)
var wg sync.WaitGroup
l := new(sync.Mutex) // lock
for record, err := r.Read(); err == nil; record, err = r.Read() {
wg.Add(1)
go func(record []string) {
defer wg.Done()
var temp []float64
for _, each := range record {
if f, err := strconv.ParseFloat(each, 64); err == nil {
temp = append(temp, f)
}
}
l.Lock() // lock before writing
dataset = append(dataset, temp) // write
l.Unlock() // unlock
}(record)
}
wg.Wait()
return dataset
}
func main() {
dataset := loadCSV("train.csv")
fmt.Println(len(dataset))
}
Some error handling was omitted to keep the example minimal, but in real code you should handle the errors.
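For comparison, the same work can be done without a mutex by having the parsing goroutines send each parsed row over a channel while the main goroutine does all of the appending; since only one goroutine ever appends, no lock is needed. A sketch under the same minimal-error-handling assumptions as above:

package main

import (
	"encoding/csv"
	"fmt"
	"os"
	"strconv"
	"sync"
)

func loadCSV(csvFile string) [][]float64 {
	f, _ := os.Open(csvFile) // error handling elided, as above
	defer f.Close()
	r := csv.NewReader(f)
	rows := make(chan []float64)
	var wg sync.WaitGroup
	for {
		record, err := r.Read()
		if err != nil {
			break // EOF or read error; handling elided
		}
		wg.Add(1)
		go func(record []string) {
			defer wg.Done()
			var temp []float64
			for _, each := range record {
				if v, err := strconv.ParseFloat(each, 64); err == nil {
					temp = append(temp, v)
				}
			}
			rows <- temp // hand the parsed row to the single collector
		}(record)
	}
	go func() {
		wg.Wait()   // all parsers have sent their rows
		close(rows) // so the collector loop below can finish
	}()
	var dataset [][]float64
	for row := range rows { // only this goroutine appends: no race
		dataset = append(dataset, row)
	}
	return dataset
}

func main() {
	dataset := loadCSV("train.csv")
	fmt.Println(len(dataset))
}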

Concurrently write multiple csv files from one, splitting on a partition column in Golang

My objective is to read one or multiple CSV files that share a common format, and write to separate files based on a partition column in the CSV data. Assume that the last column is the partition, that the data is unsorted, and that a given partition can be found in multiple files. Example of one file:
fsdio,abc,def,2017,11,06,01
1sdf9,abc,def,2017,11,06,04
22df9,abc,def,2017,11,06,03
1d243,abc,def,2017,11,06,02
If this approach smells like the dreaded XY Problem, I'm happy to adjust.
What I've tried so far:
Read in the data set and iterate over each line.
If the partition hasn't been seen, spin off a new worker routine (this will contain a file/CSV writer) and send the line into a chan []string.
As each worker is a file writer, it should only receive lines for exactly one partition over its input channel.
This obviously doesn't work (yet), as I'm not aware of how to send a line to the correct worker based on the partition value seen on a given line.
I've given each worker an id string for each partition value, but am not aware how to select that worker to send to: should I be creating a separate chan []string for each worker and sending to that channel with a select, or should a struct hold each worker with some sort of pool and routing functionality?
TL;DR: I'm lost as to how to conditionally send data to a given goroutine or channel based on some categorical string value, where the number of unique values can be arbitrary but likely does not exceed 24 unique partition values.
I will caveat by stating I've noticed questions like this do get down-voted, so if you feel this is counter-constructive or incomplete enough to down-vote, please comment with why so I can avoid repeating the offense.
Thanks for any help in advance!
Playground
Snippet:
package main
import (
"encoding/csv"
"fmt"
"log"
"strings"
"time"
)
func main() {
// CSV
r := csv.NewReader(csvFile1)
lines, err := r.ReadAll()
if err != nil {
log.Fatalf("error reading all lines: %v", err)
}
// CHANNELS
lineChan := make(chan []string)
// TRACKER
var seenPartitions []string
for _, line := range lines {
hour := line[6]
if !stringInSlice(hour, seenPartitions) {
seenPartitions = append(seenPartitions, hour)
go worker(hour, lineChan)
}
// How to send to the correct worker/channel?
lineChan <- line
}
close(lineChan)
}
func worker(id string, lineChan <-chan []string) {
for j := range lineChan {
fmt.Println("worker", id, "started job", j)
// Write to a new file here and wait for input over the channel
time.Sleep(time.Second)
fmt.Println("worker", id, "finished job", j)
}
}
func stringInSlice(str string, list []string) bool {
for _, v := range list {
if v == str {
return true
}
}
return false
}
// DUMMY
var csvFile1 = strings.NewReader(`
12fy3,abc,def,2017,11,06,04
fsdio,abc,def,2017,11,06,01
11213,abc,def,2017,11,06,02
1sdf9,abc,def,2017,11,06,01
2123r,abc,def,2017,11,06,03
1v2t3,abc,def,2017,11,06,01
1r2r3,abc,def,2017,11,06,02
g1253,abc,def,2017,11,06,02
d1e23,abc,def,2017,11,06,02
a1d23,abc,def,2017,11,06,02
12jj3,abc,def,2017,11,06,03
t1r23,abc,def,2017,11,06,03
22123,abc,def,2017,11,06,03
14d23,abc,def,2017,11,06,04
1d243,abc,def,2017,11,06,01
1da23,abc,def,2017,11,06,04
a1523,abc,def,2017,11,06,01
12453,abc,def,2017,11,06,04`)
First, a synchronous version with no concurrency magic (see the concurrent version below):
package main
import (
"encoding/csv"
"fmt"
"io"
"log"
"strings"
)
func main() {
// CSV
r := csv.NewReader(csvFile1)
partitions := make(map[string][][]string)
for {
rec, err := r.Read()
if err != nil {
if err == io.EOF {
savePartitions(partitions)
return
}
log.Fatal(err)
}
process(rec, partitions)
}
}
// prints only
func savePartitions(partitions map[string][][]string) {
for part, recs := range partitions {
fmt.Println(part)
for _, rec := range recs {
fmt.Println(rec)
}
}
}
// this can also write/append directly to a file
func process(rec []string, partitions map[string][][]string) {
l := len(rec)
part := rec[l-1]
if p, ok := partitions[part]; ok {
partitions[part] = append(p, rec)
} else {
partitions[part] = [][]string{rec}
}
}
// DUMMY
var csvFile1 = strings.NewReader(`
fsdio,abc,def,2017,11,06,01
1sdf9,abc,def,2017,11,06,01
1d243,abc,def,2017,11,06,01
1v2t3,abc,def,2017,11,06,01
a1523,abc,def,2017,11,06,01
1r2r3,abc,def,2017,11,06,02
11213,abc,def,2017,11,06,02
g1253,abc,def,2017,11,06,02
d1e23,abc,def,2017,11,06,02
a1d23,abc,def,2017,11,06,02
12jj3,abc,def,2017,11,06,03
t1r23,abc,def,2017,11,06,03
2123r,abc,def,2017,11,06,03
22123,abc,def,2017,11,06,03
14d23,abc,def,2017,11,06,04
1da23,abc,def,2017,11,06,04
12fy3,abc,def,2017,11,06,04
12453,abc,def,2017,11,06,04`)
https://play.golang.org/p/--iqZGzxCF
And the concurrent version:
package main
import (
"encoding/csv"
"fmt"
"io"
"log"
"strings"
"sync"
)
var (
// list of channels to communicate with workers
// workers map is accessed synchronously, so no mutex is required
workers = make(map[string]chan []string)
// wg is to make sure all workers done before exiting main
wg = sync.WaitGroup{}
// mu used only for sequential printing, not relevant for program logic
mu = sync.Mutex{}
)
func main() {
// wait for all workers to finish up before exit
defer wg.Wait()
r := csv.NewReader(csvFile1)
for {
rec, err := r.Read()
if err != nil {
if err == io.EOF {
savePartitions()
return
}
log.Fatal(err) // sorry for the panic
}
process(rec)
}
}
func process(rec []string) {
l := len(rec)
part := rec[l-1]
if c, ok := workers[part]; ok {
// send rec to worker
c <- rec
} else {
// if no worker for the partition
// make a chan
nc := make(chan []string)
workers[part] = nc
// register the worker before starting it
wg.Add(1)
// start worker with this chan
go worker(nc)
// send rec to worker via chan
nc <- rec
}
}
func worker(c chan []string) {
// wg.Done signals worker completion to main; the matching wg.Add
// runs in process, before this goroutine starts
defer wg.Done()
part := [][]string{}
for {
// wait for a rec or close(chan)
rec, ok := <-c
if ok {
// save the rec
// instead of accumulation in memory
// this can be saved to file directly
part = append(part, rec)
} else {
// channel closed on EOF
// dump partition
// locks ensures sequential printing
// not required for independent files
mu.Lock()
for _, p := range part {
fmt.Printf("%+v\n", p)
}
mu.Unlock()
return
}
}
}
// simply signals to workers to stop
func savePartitions() {
for _, c := range workers {
// signal to all workers to exit
close(c)
}
}
// DUMMY
var csvFile1 = strings.NewReader(`
fsdio,abc,def,2017,11,06,01
1sdf9,abc,def,2017,11,06,01
1d243,abc,def,2017,11,06,01
1v2t3,abc,def,2017,11,06,01
a1523,abc,def,2017,11,06,01
1r2r3,abc,def,2017,11,06,02
11213,abc,def,2017,11,06,02
g1253,abc,def,2017,11,06,02
d1e23,abc,def,2017,11,06,02
a1d23,abc,def,2017,11,06,02
12jj3,abc,def,2017,11,06,03
t1r23,abc,def,2017,11,06,03
2123r,abc,def,2017,11,06,03
22123,abc,def,2017,11,06,03
14d23,abc,def,2017,11,06,04
1da23,abc,def,2017,11,06,04
12fy3,abc,def,2017,11,06,04
12453,abc,def,2017,11,06,04`)
https://play.golang.org/p/oBTPosy0yT
Have fun!