I am writing a large JSON payload (~2.7 MB) to a GraphQL response using a custom gqlgen scalar I named "Bytes". The HTTP response takes about 60 ms.
func MarshalBytes(b Bytes) graphql.Marshaler {
	return graphql.WriterFunc(func(w io.Writer) {
		var str = b.ToString()
		io.WriteString(w, str)
	})
}
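For what it's worth, the marshaler can also write the underlying bytes directly instead of converting to a string first. This is only a minimal sketch and assumes Bytes is declared as type Bytes []byte:
// Sketch only: assumes `type Bytes []byte`.
func MarshalBytes(b Bytes) graphql.Marshaler {
	return graphql.WriterFunc(func(w io.Writer) {
		// Write the pre-serialized JSON bytes as-is, with no string copy.
		w.Write(b)
	})
}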
I tried accessing the gin context from the resolver (via middleware, as described in the gqlgen recipe: https://github.com/99designs/gqlgen/blob/master/docs/content/recipes/gin.md) and writing the same JSON to it directly:
b, _ := jsoniter.Marshal(bundles)
ginCtx.Writer.Write(b)
This takes only ~15 ms, which is much faster and comparable to the performance of the plain gin REST endpoint.
Why does writing the response through gqlgen incur this performance hit compared to writing to the gin context directly?
We have an ASP.NET MVC application in which we need to send back a JSON response. The content for this JSON response comes from a PipeReader.
The approach we have taken is to read all of the content from the PipeReader using ReadAsync, then convert the resulting byte array to a string and write it as a Base64 string.
Here is the code sample:
List<byte> bytes = new List<byte>();
try
{
    while (true)
    {
        ReadResult result = await reader.ReadAsync();
        ReadOnlySequence<byte> buffer = result.Buffer;
        bytes.AddRange(buffer.ToArray());
        reader.AdvanceTo(buffer.End);
        if (result.IsCompleted)
        {
            break;
        }
    }
}
finally
{
    await reader.CompleteAsync();
}
byte[] byteArray = bytes.ToArray();
var base64str = Convert.ToBase64String(byteArray);
We have written a JsonConverter that does the conversion to JSON. The converter holds a reference to the Utf8JsonWriter instance, and we write using the WriteString method on that Utf8JsonWriter.
The above approach requires us to read the entire content into memory from the PipeReader and only then write to the Utf8JsonWriter.
Instead, we want to read a sequence of bytes from the PipeReader, convert it to UTF-8, and write it immediately. We do not want to convert the entire content in memory before writing.
Is that even feasible? I don't know whether the UTF-8 conversion can be done in chunks instead of all in one go.
The main reason for this is that the content coming from the PipeReader can be large, so we want to do some kind of streaming instead of converting to a string in memory and then writing to the JSON output.
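What we are hoping for is something like incremental Base64 encoding, where the encoder carries a partial 3-byte group over between chunks instead of needing the whole input at once. A minimal sketch of that idea (written in Go purely as an illustration, since that is the language used elsewhere in this document; the source and chunk size are placeholders):
package main

import (
	"encoding/base64"
	"io"
	"os"
	"strings"
)

func main() {
	// Stand-in for data arriving in pieces from a pipe or socket.
	src := strings.NewReader("content that would normally arrive in chunks")

	// The streaming encoder buffers the partial 3-byte group between writes,
	// so the input never has to be held in memory all at once.
	enc := base64.NewEncoder(base64.StdEncoding, os.Stdout)
	if _, err := io.CopyBuffer(enc, src, make([]byte, 16)); err != nil {
		panic(err)
	}
	enc.Close() // flush the final, possibly padded, group
}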
I'm making a JSON API wrapper client that needs to fetch paginated results, where the URL to the next page is provided by the previous page. To reduce code duplication for the 100+ entities that share the same response format, I would like to have a single client method that fetches and unmarshals the different entities from all paginated pages.
My current approach, in a simplified (pseudo) version (without error handling, etc.):
type ListResponse struct {
	Data struct {
		Results []interface{} `json:"results"`
		Next    string        `json:"__next"`
	} `json:"d"`
}

func (c *Client) ListRequest(uri string) (listResponse ListResponse) {
	// Do an HTTP request to uri and get the body.
	body := []byte(`{ "d": { "__next": "URL", "results": []}}`)
	json.Unmarshal(body, &listResponse)
	return
}
func (c *Client) ListRequestAll(uri string, v interface{}) {
	var a []interface{}
	f := c.ListRequest(uri)
	a = append(a, f.Data.Results...)
	next := f.Data.Next
	for next != "" {
		r := c.ListRequest(next)
		a = append(a, r.Data.Results...)
		next = r.Data.Next
	}
	b, _ := json.Marshal(a)
	json.Unmarshal(b, v)
}
// Then in a method requesting all results for a single entity
var entities []Entity1
client.ListRequestAll("https://foo.bar/entities1.json", &entities)
// and somewhere else
var entities []Entity2
client.ListRequestAll("https://foo.bar/entities2.json", &entities)
The problem, however, is that this approach is inefficient and uses too much memory: it first unmarshals into the generic ListResponse with results as []interface{} (to read the next URL and concatenate the results into a single slice), then marshals that []interface{} again, only to unmarshal it immediately afterwards into the destination slice (e.g. []Entity1).
I might be able to use the reflect package to dynamically make new slices of these entities, unmarshal directly into them, and concatenate/append them afterwards; however, if I understand correctly, I had better not use reflect unless strictly necessary...
Take a look at the RawMessage type in the encoding/json package. It allows you to defer the decoding of json values until later. For example:
Results []json.RawMessage `json:"results"`
or even...
Results json.RawMessage `json:"results"`
Since json.RawMessage is just a slice of bytes, this will be much more efficient than the intermediate []interface{} you are unmarshalling to.
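Plugged into the ListResponse from the question, that would look something like this (a sketch; only the Results field changes):
type ListResponse struct {
	Data struct {
		// Each result stays as raw, undecoded JSON until the caller
		// unmarshals it into its concrete entity type.
		Results []json.RawMessage `json:"results"`
		Next    string            `json:"__next"`
	} `json:"d"`
}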
As for the second part, on how to assemble these into a single slice given multiple page reads, you could punt that question to the caller by making the caller use a slice-of-slices type:
// Then in a method requesting all results for a single entity
var entityPages [][]Entity1
client.ListRequestAll("https://foo.bar/entities1.json", &entityPages)
This still has the unbounded memory consumption problem your general design has, however, since you have to load all of the pages / items at once. You might want to consider changing to an Open/Read abstraction like working with files. You'd have some Open method that returns another type that, like os.File, provides a method for reading a subset of data at a time, while internally requesting pages and buffering as needed.
Perhaps something like this (untested):
type PagedReader struct {
	c      *Client
	buffer []json.RawMessage
	next   string
}

// getPage assumes ListResponse.Data.Results is []json.RawMessage, as suggested above.
func (r *PagedReader) getPage() {
	f := r.c.ListRequest(r.next)
	r.next = f.Data.Next
	r.buffer = append(r.buffer, f.Data.Results...)
}

// ReadItems fills output with up to len(output) decoded items and returns how
// many were filled. Each element of output should be a pointer to the concrete
// entity type so json.Unmarshal has something to decode into.
func (r *PagedReader) ReadItems(output []interface{}) int {
	for len(output) > len(r.buffer) && r.next != "" {
		r.getPage()
	}
	n := 0
	for i := 0; i < len(output) && i < len(r.buffer); i++ {
		json.Unmarshal(r.buffer[i], output[i])
		n++
	}
	r.buffer = r.buffer[n:]
	return n
}
I need to transfer a MongoDB query to a different system, and for this I would like to use MongoDB Extended JSON, mostly because I use date comparisons in my queries.
So, the kernel of the problem is that I need to transfer a MongoDB query that is generated in a Node.js back-end to another back-end written in Go.
Intuitively, the most obvious format for sending this query via REST is JSON. But MongoDB queries are not exactly JSON; they are BSON, which contains special constructs for dates.
So, the idea is to convert the queries into JSON, using MongoDB Extended JSON as the representation of the special constructs. After some tests it's clear that these converted queries do not work as-is: both the MongoDB shell and queries sent via Node.js need the special ISODate or new Date constructs.
Finally, the actual question: are there functions to encode/decode from JSON to BSON, taking MongoDB Extended JSON into account, in both JavaScript (Node.js) and Go?
Updates
Node.js encoding package
Apparently there is a Node.js package that parses and stringifies BSON/JSON.
So, half of my problem is resolved. I wonder if there is something like this in Go.
Sample query
For example, the following query is in normal BSON:
{ Tmin: { $gt: ISODate("2006-01-01T23:00:00.000Z") } }
Translated into MongoDB Extended JSON, it becomes:
{ "Tmin": { "$gt" : { "$date" : 1136156400000 }}}
After some research I found the mejson library; however, it's for marshaling only, so I decided to write an unmarshaller.
Behold ejson (I wrote it). Right now it's a very simple ejson -> bson converter; there's no bson -> ejson yet, but you can use mejson for that.
An example:
const j = `{"_id":{"$oid":"53c2ab5e4291b17b666d742a"},"last_seen_at":{"$date":1405266782008},"display_name":{"$undefined":true},
"ref":{"$ref":"col2", "$id":"53c2ab5e4291b17b666d742b"}}`
type TestS struct {
Id bson.ObjectId `bson:"_id"`
LastSeenAt *time.Time `bson:"last_seen_at"`
DisplayName *string `bson:"display_name,omitempty"`
Ref mgo.DBRef `bson:"ref"`
}
func main() {
var ts TestS
if err := ejson.Unmarshal([]byte(j), &ts); err != nil {
panic(err)
}
fmt.Printf("%+v\n", ts)
//or to convert the ejson to bson.M
var m map[string]interface{}
if err := json.Unmarshal([]byte(j), &m); err != nil {
t.Fatal(err)
}
err := ejson.Normalize(m)
if err != nil {
panic(err)
}
fmt.Printf("%+v\n", m)
}
I have an io.Reader, which I get from http.Request.Body, that reads a JSON byte slice from a server.
I would like to stream this to json.NewDecoder. However, I would also like to intercept the JSON before it hits json.NewDecoder and substitute certain parts of it. For example, the JSON string contains empty hashes "{}", which I would like to remove due to a bug in the server's JSON output.
I am currently achieving my goal using json.Unmarshal but not using the JSON streaming parser:
data, _ := ioutil.ReadAll(r.Body)
data = bytes.Replace(data, []byte("{}"), []byte(""), -1)
json.Unmarshal(data, [my struct])
How can I achieve the same thing as above, but using json.NewDecoder, so I can avoid the extra full passes over r.Body's data? Here's some code using a pseudo function ReplaceStream(r io.Reader, old, new []byte):
reader := ReplaceStream(r.Body, []byte("{}"), []byte(""))
dec := json.NewDecoder(reader)
dec.Decode([my struct])
I know ReplaceStream might be fairly trivial to make, but is there anything in the standard library to do this that I am unaware of?
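For reference, here is the rough, untested kind of ReplaceStream I have in mind. It runs bytes.Replace chunk by chunk and holds back the last len(old)-1 bytes between reads, so an occurrence of old that spans two reads is still replaced:
// Untested sketch: streams r through a chunk-by-chunk bytes.Replace.
func ReplaceStream(r io.Reader, old, new []byte) io.Reader {
	pr, pw := io.Pipe()
	go func() {
		defer pw.Close()
		var carry []byte
		buf := make([]byte, 4096)
		for {
			n, err := r.Read(buf)
			// bytes.Replace always returns a fresh slice, so chunk never
			// aliases carry or buf.
			chunk := bytes.Replace(append(carry, buf[:n]...), old, new, -1)
			if err != nil {
				if len(chunk) > 0 {
					pw.Write(chunk) // flush whatever is left
				}
				if err != io.EOF {
					pw.CloseWithError(err)
				}
				return
			}
			// Hold back a potential partial match at the end of this chunk.
			hold := len(old) - 1
			if hold > len(chunk) {
				hold = len(chunk)
			}
			if out := chunk[:len(chunk)-hold]; len(out) > 0 {
				pw.Write(out)
			}
			carry = append(carry[:0], chunk[len(chunk)-hold:]...)
		}
	}()
	return pr
}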
My advice is to just treat that kind of message as a special case and avoid the extra parsing / substituting for all the other requests:
data, _ := ioutil.ReadAll(r.Body)
// FIXME: overcome bug #12312 of json server
if string(data) == `{"list": [{}]}` {
	return nil // return an empty result for this known-buggy payload
}
// Normal datastruct ..
I'm writing an API which retrieves Mongo documents and returns those documents as a JSON response.
I can certainly do this by creating a struct with the proper field mappings, but since I'm not processing these documents, I simply want to convert the raw data I get from the code below to JSON. My API will then return the JSON as a response.
I have the following code:
var raw []bson.Raw
err = myCollection.Find(
	bson.M{"name": name},
).All(&raw)
I want to convert raw to JSON. How would I do that? Is there a better way of doing this other than starting from a bson.Raw?
Tech stack:
Go 1.1
mgo v1 http://godoc.org/labix.org/v1/mgo
bson v1 http://godoc.org/labix.org/v1/mgo/bson
Thanks.
Unmarshal it into maps instead:
var maps []bson.M
err = myCollection.Find(bson.M{"name": name}).All(&maps)
This way you can provide these same maps to the encoding/json package's Marshal function.
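A sketch of the full round trip, assuming w is the handler's http.ResponseWriter and with error handling elided for brevity:
var maps []bson.M
err = myCollection.Find(bson.M{"name": name}).All(&maps)

// bson.M is just a map[string]interface{}, so encoding/json can marshal it.
out, err := json.Marshal(maps)

w.Header().Set("Content-Type", "application/json")
w.Write(out) // the JSON response returned by the API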