I have a function that returns an interface{}. How can I serialize this into a JSON array without "hardcoding" the fields in a struct?
I am using https://github.com/jmoiron/jsonq to return the interface.
json.Unmarshal(resp.Bytes(), &response)

data := map[string]interface{}{}
dec := json.NewDecoder(strings.NewReader(resp.String()))
if err := dec.Decode(&data); err != nil {
    log.Fatalln("Unable to decode response: ", err)
}
jq := jsonq.NewQuery(data)

results, err := jq.Array("results")
if err != nil {
    log.Fatalln("Unable to get results: ", err)
}
if len(results) == 0 {
    return nil
}
return results // this is returning an interface{}
A JSON object can always be unmarshalled into a map[string]interface{}. That is what you need to work with, then.
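For example, here is a minimal sketch of that approach; the sample payload is just an assumption that mirrors the "results" key used in your code:

package main

import (
    "encoding/json"
    "fmt"
    "log"
)

func main() {
    // A stand-in for the HTTP response body from the question.
    payload := []byte(`{"results": [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]}`)

    var data map[string]interface{}
    if err := json.Unmarshal(payload, &data); err != nil {
        log.Fatalln("Unable to decode response: ", err)
    }

    // Type-assert the "results" value to a slice; each element of a JSON
    // array of objects decodes to a map[string]interface{}.
    results, ok := data["results"].([]interface{})
    if !ok {
        log.Fatalln("results is not a JSON array")
    }
    for _, r := range results {
        obj := r.(map[string]interface{})
        fmt.Println(obj["id"], obj["name"])
    }
}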
I should have checked the type I was dealing with. I found out by:
fmt.Println(reflect.TypeOf(results))
which returned: []interface {}
I was then able to iterate over it by using:
for _, event := range results {
    v, err := json.MarshalIndent(event, "", " ")
    if err != nil {
        fmt.Println("error:", err)
    }
    fmt.Println(string(v))
}
I have a JSON array of objects in Redis that I want to loop through, but when I fetch the data its type is interface{}, and I cannot range over a value of type interface{}.
array := redis.Do(ctx, "JSON.GET", "key")
arrayResult, e := array.Result()
if e != nil {
    log.Printf("could not get json with command: %s", e)
}
for _, i := range arrayResult {
    fmt.Println(i)
}
I believe you should be able to do
for _, i := range arrayResult.([]byte) {
    // do work here
}
Thank you guys, I found a solution. First I needed to convert arrayResult to a byte slice, then unmarshal it into a slice (of structs or maps), and now I am able to range over it.
array := redis.Do(ctx, "JSON.GET", "key")
arrayResult, e := array.Result()
if e != nil {
    log.Printf("could not get json with command: %s", e)
}

byteKey := []byte(fmt.Sprintf("%v", arrayResult))

var redisResult []map[string]interface{} // or a slice of your own struct type
errUnmarshalRedisResult := json.Unmarshal(byteKey, &redisResult)
if errUnmarshalRedisResult != nil {
    log.Printf("cannot unmarshal msg: %s", errUnmarshalRedisResult)
}

for _, i := range redisResult {
    fmt.Println(i)
}
I need to parse a really long JSON file (more than a million items). I don't want to load it all into memory, so I read it chunk by chunk. There's a good example with an array of items here. The problem is that I'm dealing with a map, and when I call Decode I get not at beginning of value.
I can't figure out what should be changed.
const data = `{
    "object1": {"name": "cattle","location": "kitchen"},
    "object2": {"name": "table","location": "office"}
}`

type ReadObject struct {
    Name     string `json:"name"`
    Location string `json:"location"`
}

func ParseJSON() {
    dec := json.NewDecoder(strings.NewReader(data))

    tkn, err := dec.Token()
    if err != nil {
        log.Fatalf("failed to read opening token: %v", err)
    }
    fmt.Printf("opening token: %v\n", tkn)

    objects := make(map[string]*ReadObject)
    for dec.More() {
        var nextSymbol string
        if err := dec.Decode(&nextSymbol); err != nil {
            log.Fatalf("failed to parse next symbol: %v", err)
        }

        nextObject := &ReadObject{}
        if err := dec.Decode(&nextObject); err != nil {
            log.Fatalf("failed to parse next object: %v", err)
        }
        objects[nextSymbol] = nextObject
    }

    tkn, err = dec.Token()
    if err != nil {
        log.Fatalf("failed to read closing token: %v", err)
    }
    fmt.Printf("closing token: %v\n", tkn)

    fmt.Printf("OBJECTS: \n%v\n", objects)
}
TL;DR: when you call the Token() method for the first time, you move the offset from the beginning (of a JSON value), and that is why you get the error.
You are working with this struct (link):
type Decoder struct {
    // other fields omitted for simplicity
    tokenState int
}
Pay attention to the tokenState field. Its value can be one of (link):
const (
    tokenTopValue = iota
    tokenArrayStart
    tokenArrayValue
    tokenArrayComma
    tokenObjectStart
    tokenObjectKey
    tokenObjectColon
    tokenObjectValue
    tokenObjectComma
)
Let's get back to your code. You call the Token() method. It consumes the first valid JSON token, {, and changes tokenState from tokenTopValue to tokenObjectStart (link). Now you are in an "inside an object" state.
If you try to call Decode() at this point, you get an error (not at beginning of value). This is because the tokenState values in which Decode() is allowed are tokenTopValue, tokenArrayStart, tokenArrayValue, and tokenObjectValue, i.e. places where a "full" value is expected, not part of one (link).
To avoid this, you can simply not call Token() at all and do something like this:
dec := json.NewDecoder(strings.NewReader(dataMapFromJson))

objects := make(map[string]*ReadObject)
if err := dec.Decode(&objects); err != nil {
    log.Fatalf("failed to parse objects: %v", err)
}

fmt.Printf("OBJECTS: \n%v\n", objects)
Or, if you want to read chunk by chunk, you could keep calling Token() until you reach a "full" value, and then call Decode() on that value (I guess this should work), as in the sketch below.
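A minimal sketch of that chunk-by-chunk idea, reusing the question's data and ReadObject type: inside an object, Token() returns each key as a string, and Decode() then reads the value that follows it.

dec := json.NewDecoder(strings.NewReader(data))

// Consume the opening { of the top-level object.
if _, err := dec.Token(); err != nil {
    log.Fatalf("failed to read opening token: %v", err)
}

objects := make(map[string]*ReadObject)
for dec.More() {
    // The next token inside an object is always the key.
    keyToken, err := dec.Token()
    if err != nil {
        log.Fatalf("failed to read key: %v", err)
    }
    key := keyToken.(string)

    // Decode() is allowed here because the decoder now expects a full value.
    nextObject := &ReadObject{}
    if err := dec.Decode(nextObject); err != nil {
        log.Fatalf("failed to parse object %q: %v", key, err)
    }
    objects[key] = nextObject
}

// Consume the closing }.
if _, err := dec.Token(); err != nil {
    log.Fatalf("failed to read closing token: %v", err)
}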
After consuming the initial { with your first call to dec.Token(), you must:
use dec.Token() to extract the next key
after extracting the key, call dec.Decode(&nextObject) to decode an entry
Example code:
for dec.More() {
    key, err := dec.Token()
    if err != nil {
        // handle error
    }

    var val interface{}
    err = dec.Decode(&val)
    if err != nil {
        // handle error
    }

    fmt.Printf(" %s : %v\n", key, val)
}
https://play.golang.org/p/5r1d8MsNlKb
I'm attempting to implement testing with golden files; however, the JSON my function generates varies in order but maintains the same values. I've implemented the comparison method used here:
How to compare two JSON requests?
But it's order-dependent. And as stated here by Brad:
JSON objects are unordered, just like Go maps. If you're depending on the order that a specific implementation serializes your JSON objects in, you have a bug.
I've written some sample code that simulates my predicament:
package main

import (
    "bufio"
    "encoding/json"
    "fmt"
    "io/ioutil"
    "math/rand"
    "os"
    "reflect"
    "time"
)

type example struct {
    Name     string
    Earnings float64
}

func main() {
    slice := GetSlice()

    gfile, err := ioutil.ReadFile("testdata/example.golden")
    if err != nil {
        fmt.Println(err)
        fmt.Println("Failed reading golden file")
    }

    testJSON, err := json.Marshal(slice)
    if err != nil {
        fmt.Println(err)
        fmt.Println("Error marshalling slice")
    }

    equal, err := JSONBytesEqual(gfile, testJSON)
    if err != nil {
        fmt.Println(err)
        fmt.Println("Error comparing JSON")
    }
    if !equal {
        fmt.Println("Results don't match JSON")
    } else {
        fmt.Println("Success!")
    }
}

func GetSlice() []example {
    t := []example{
        {"Penny", 50.0},
        {"Sheldon", 70.0},
        {"Raj", 20.0},
        {"Bernadette", 200.0},
        {"Amy", 250.0},
        {"Howard", 1.0},
    }
    rand.Seed(time.Now().UnixNano())
    rand.Shuffle(len(t), func(i, j int) { t[i], t[j] = t[j], t[i] })
    return t
}

func JSONBytesEqual(a, b []byte) (bool, error) {
    var j, j2 interface{}
    if err := json.Unmarshal(a, &j); err != nil {
        return false, err
    }
    if err := json.Unmarshal(b, &j2); err != nil {
        return false, err
    }
    return reflect.DeepEqual(j2, j), nil
}

func WriteTestSliceToFile(arr []example, filename string) {
    file, err := os.OpenFile(filename, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
    if err != nil {
        fmt.Println("failed creating file:", err)
    }

    datawriter := bufio.NewWriter(file)

    marshalledStruct, err := json.Marshal(arr)
    if err != nil {
        fmt.Println("Error marshalling json")
        fmt.Println(err)
    }

    _, err = datawriter.Write(marshalledStruct)
    if err != nil {
        fmt.Println("Error writing to file")
        fmt.Println(err)
    }

    datawriter.Flush()
    file.Close()
}
JSON arrays are ordered. The json.Marshal function preserves order when encoding a slice to a JSON array.
JSON objects are not ordered. The json.Marshal function writes object members in sorted key order as described in the documentation.
The bradfitz comment about JSON object ordering is not relevant to this question:
The application in the question is working with a JSON array, not a JSON object.
The package was updated to write object fields in sorted key order a couple of years after Brad's comment.
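As a quick illustration of that last point, encoding a Go map always produces the keys in sorted order, regardless of insertion order:

package main

import (
    "encoding/json"
    "fmt"
)

func main() {
    m := map[string]int{"b": 2, "c": 3, "a": 1}
    out, _ := json.Marshal(m)
    fmt.Println(string(out)) // {"a":1,"b":2,"c":3}
}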
To compare slices while ignoring order, sort the two slices before comparing. This can be done before encoding to JSON or after decoding from JSON.
sort.Slice(slice, func(i, j int) bool {
    if slice[i].Name != slice[j].Name {
        return slice[i].Name < slice[j].Name
    }
    return slice[i].Earnings < slice[j].Earnings
})
For unit testing, you could use assert.JSONEq from Testify. If you need to do it programmatically, you could follow the code of the JSONEq function.
https://github.com/stretchr/testify/blob/master/assert/assertions.go#L1551
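For example, a minimal test sketch using the question's GetSlice and golden file (note that JSONEq compares the decoded values, so object key order is ignored, but array element order still matters, which is why the sorting above remains relevant):

package main

import (
    "encoding/json"
    "io/ioutil"
    "testing"

    "github.com/stretchr/testify/assert"
)

func TestGolden(t *testing.T) {
    gfile, err := ioutil.ReadFile("testdata/example.golden")
    if err != nil {
        t.Fatalf("failed reading golden file: %v", err)
    }

    testJSON, err := json.Marshal(GetSlice())
    if err != nil {
        t.Fatalf("failed marshalling slice: %v", err)
    }

    // Fails with a readable diff when the two documents are not equivalent JSON.
    assert.JSONEq(t, string(gfile), string(testJSON))
}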
I have two JSON inputs built this way:
"count: 1 result: fields"
I would like to concatenate the fields that I find within result, without using a defined structure. I have tried many ways, but most of the time the result is an error about the type interface{}, or the last map overwrites the data.
I would like the "result" fields of both the first and the second map to be merged within the result in the output.
oracle, err := http.Get("http://XXX:8080/XXXX/" + id)
if err != nil {
    panic(err)
}
defer oracle.Body.Close()

mysql, err := http.Get("http://XXX:3000/XXX/" + id)
if err != nil {
    panic(err)
}
defer mysql.Body.Close()

oracleJSON, err := ioutil.ReadAll(oracle.Body)
if err != nil {
    panic(err)
}
mysqlJSON, err := ioutil.ReadAll(mysql.Body)
if err != nil {
    panic(err)
}

var oracleOUT map[string]interface{}
var mysqlOUT map[string]interface{}

json.Unmarshal(oracleJSON, &oracleOUT)
json.Unmarshal(mysqlJSON, &mysqlOUT)

a := oracleOUT["result"]
b := mysqlOUT["result"]

c.JSON(http.StatusOK, gin.H{"result": ????})
This is an example of the JSON:
{"count":1,"result":{"COD_DIPENDENTE":"00060636","MATRICOLA":"60636","COGNOME":"PIPPO"}}
If I have two JSON documents like this, the result of the function should be:
`"result":{"COD_DIPENDENTE":"00060636","MATRICOLA":"60636","COGNOME":"PIPPO","COD_DIPENDENTE":"00060636","MATRICOLA":"60636","COGNOME":"PIPPO"}}`
The output you are looking for is not valid JSON. However, with a small change you can output something very similar to your example that is valid JSON.
You probably do want to use a defined structure for the portion of the input that has a known structure, so that you can extract the more abstract "result" section more easily.
If you start at the top of the input structure using a map[string]interface{} then you'll have to do a type assertion on the "result" key. For example:
var input map[string]interface{}

err = json.Unmarshal(data, &input)
if err != nil {
    return err
}

keys, ok := input["result"].(map[string]interface{})
if !ok {
    return errors.New("wasn't the type we expected")
}
However, if you use a defined structure for the top level, you can do it like the following, which feels much cleaner:
type Input struct {
    Count  int                    `json:"count"`
    Result map[string]interface{} `json:"result"`
}

var input Input

err = json.Unmarshal(data, &input)
if err != nil {
    return err
}

// from here you can use input.Result directly without a type assertion
To generate output that has duplicate keys, you could use an array of objects with a single key/value pair in each; then you end up with a valid JSON structure that does not overwrite keys. Here's how to do that (playground link):
package main

import (
    "encoding/json"
    "fmt"
)

type Input struct {
    Count  int                    `json:"count"`
    Result map[string]interface{} `json:"result"`
}

type Output struct {
    Count  int                      `json:"count"`
    Result []map[string]interface{} `json:"result"`
}

var inputdata = [][]byte{
    []byte(`{"count":1,"result":{"COD_DIPENDENTE":"00060636", "MATRICOLA":"60636", "COGNOME":"PIPPO"}}`),
    []byte(`{"count":1,"result":{"COD_DIPENDENTE":"00060636", "MATRICOLA":"60636", "COGNOME":"PIPPO"}}`),
}

func main() {
    inputs := make([]Input, len(inputdata))
    for i := range inputs {
        err := json.Unmarshal(inputdata[i], &inputs[i])
        if err != nil {
            panic(err)
        }
    }

    var out Output
    out.Count = len(inputs)
    for _, input := range inputs {
        for k, v := range input.Result {
            out.Result = append(out.Result, map[string]interface{}{k: v})
        }
    }

    outdata, _ := json.Marshal(out)
    fmt.Println(string(outdata))
}
Which produces output that looks like this when formatted:
{
    "count": 2,
    "result": [
        {"MATRICOLA": "60636"},
        {"COGNOME": "PIPPO"},
        {"COD_DIPENDENTE": "00060636"},
        {"COGNOME": "PIPPO"},
        {"COD_DIPENDENTE": "00060636"},
        {"MATRICOLA": "60636"}
    ]
}
I'd like to make JSON out of a hash received from Redis using redigo:
func showHashtags(c *gin.Context) {
    hashMap, err := redis.StringMap(conn.Do("HGETALL", MyDict))
    if err != nil {
        fmt.Println(err)
    }
    fmt.Println(hashMap) // works fine and shows the map

    m := make(map[string]string)
    for k, v := range hashMap {
        m[k] = v
    }

    jmap, _ := json.Marshal(m)
    c.JSON(200, jmap)
}
However, the result in the browser is gibberish like:
"eyIgIjoiMiIsIjExX9iq24zYsSAiOiIxIiwiQWxsNFJhbWluICI6IjEiLCJCSUhFICI6IjMiLCJCVFNBUk1ZICI6IjIiLCJDTUJZTiAiOiIxI....
What is wrong here? How can I fix it?
The variable jmap has type []byte. The JSON encoder called by c.JSON() marshals a []byte as a base64-encoded string, as you see in the output.
To fix the problem, use one level of JSON encoding by passing the map directly to c.JSON:
hashMap, err := redis.StringMap(conn.Do("HGETALL", MyDict))
if err != nil {
    // handle error
}

m := make(map[string]string)
for k, v := range hashMap {
    m[k] = v
}

c.JSON(200, m)
Because hashMap is a map[string]string, you can use it directly:
hashMap, err := redis.StringMap(conn.Do("HGETALL", MyDict))
if err != nil {
    // handle error
}

c.JSON(200, hashMap)