Problems while parsing a JSON response coming from the Google speech server

I am trying to implement the Google speech recognition API in my program. To do this, I use the function below to parse the JSON sent back from the Google server. However, the program sometimes works well and sometimes fails with an access violation error.
The code is below. Where is the problem? Is there any way to check whether the JSON is well formed before parsing it?
function TGoogleSpeech.Convert(const stream: TStream): string;
var
  ret: string;
  js: TlkJSONobject;
begin
  try
    ret := FHttp.Post(FURL, stream);
    js := TlkJSON.ParseText(ret) as TlkJSONobject;
    try
      Result := js.Field['hypotheses'].Child[0].Field['utterance'].Value;
    finally
      js.Free;
    end;
  except
    Result := '';
  end;
end;
I am using the uLkJSON library.

Guys, I found the answer, and it is really interesting: when recognition is poor, the Google server sends this JSON:
{"status":5,"id":"","hypotheses":[]}
It has no utterance field, and that is what causes the problem. Therefore I made this update, cleaning out the unwanted part with the code below:
ret := AnsiReplaceText(ret, '{"status":5,"id":"","hypotheses":[]}', '');
if AnsiContainsText(ret, 'utterance') and (Length(ret) > 1) then
and so on..
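A more robust alternative to patching the raw string is to validate the parsed object before dereferencing it. Here is a minimal sketch, reusing only the uLkJSON members already used above (ParseText, Field, Child, Count, Value) and assuming that, in your copy of the library, ParseText returns nil for invalid input and Field returns nil for a missing name - worth verifying against your version:
function TGoogleSpeech.Convert(const stream: TStream): string;
var
  ret: string;
  js, hyp, utt: TlkJSONbase;
begin
  Result := '';
  try
    ret := FHttp.Post(FURL, stream);
    js := TlkJSON.ParseText(ret); // assumed to return nil for invalid JSON
    if js = nil then
      Exit;
    try
      if js is TlkJSONobject then
      begin
        hyp := TlkJSONobject(js).Field['hypotheses'];
        // a "status":5 reply carries "hypotheses":[], so test Count first
        if (hyp <> nil) and (hyp.Count > 0) and (hyp.Child[0] is TlkJSONobject) then
        begin
          utt := TlkJSONobject(hyp.Child[0]).Field['utterance'];
          if utt <> nil then
            Result := utt.Value;
        end;
      end;
    finally
      js.Free;
    end;
  except
    Result := ''; // preserve the original behaviour of swallowing HTTP errors
  end;
end;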


Do I need to free an instance of TJSONArray?

The answer is obviously yes, but I have this code that has been running 24/7 for almost a month now and everything is fine. Here is the code:
var
  jsonArray: TJSONArray;
  jsonValue: TJSONValue;
  json: string;
begin
  json := 'JSON_MASTER';
  jsonArray := TJSONObject.ParseJSONValue(TEncoding.UTF8.GetBytes(json), 0) as TJSONArray;
  for jsonValue in jsonArray do
  begin
    // do the thing 1
  end;

  json := 'JSON_DETAIL';
  jsonArray := TJSONObject.ParseJSONValue(TEncoding.UTF8.GetBytes(json), 0) as TJSONArray;
  for jsonValue in jsonArray do
  begin
    // do the thing 2
  end;
end;
The application is a SOAP web service. The function is executed around 2K times per day. I am aware of the issue in the code, but because the SOAP service is not crashing, I'm not fixing it yet. The Task Manager performance report looks fine; there is no sign of growing memory usage. Why is there no sign of memory leaks? Is there such a thing as garbage collection for TJSONArray?
It depends on which platform you are running on.
If your app is running on a non-ARC platform, such as Windows or OSX, then YES, you need to manually free the TJSONArray when you are done using it, or else it will be leaked.
If your app is running on an ARC platform, such as iOS, Android, or Linux, then NO, you do not need to manually free the TJSONArray when you are done using it, as it will be freed automatically when all references to it have gone out of scope.
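On a non-ARC platform there is no garbage collection, so both arrays parsed in the code above are simply leaked. A minimal sketch of the same routine with the leaks fixed (the 'JSON_MASTER'/'JSON_DETAIL' strings are the question's placeholders, and the nil checks guard against input that does not parse to an array):
var
  jsonArray: TJSONArray;
  jsonValue: TJSONValue;
  json: string;
begin
  json := 'JSON_MASTER';
  jsonArray := TJSONObject.ParseJSONValue(TEncoding.UTF8.GetBytes(json), 0) as TJSONArray;
  if jsonArray <> nil then
  try
    for jsonValue in jsonArray do
    begin
      // do the thing 1
    end;
  finally
    jsonArray.Free; // frees the array and every TJSONValue it owns
  end;

  json := 'JSON_DETAIL';
  jsonArray := TJSONObject.ParseJSONValue(TEncoding.UTF8.GetBytes(json), 0) as TJSONArray;
  if jsonArray <> nil then
  try
    for jsonValue in jsonArray do
    begin
      // do the thing 2
    end;
  finally
    jsonArray.Free;
  end;
end;
As for why nothing shows up: at roughly 2K calls per day the leaked parse trees grow very slowly, so Task Manager is unlikely to show anything alarming for a long time. Setting ReportMemoryLeaksOnShutdown := True (with the default FastMM memory manager) is a more reliable way to make the leaks visible than watching the performance graph.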

JSON is never a valid return type, not even with tutorial code

So I've been at this for many hours. I'm creating a JSON export script that generates a JSON object from a sheet, to be used by another web app.
No matter how I return it, it will always, 100% of the time, tell me the following:
The script completed but the returned value is not a supported return type.
Even when just copy-pasting tutorial code, like the following:
var myDog = {"name": "Rhino", "breed": "pug", "age": 8};
var myJSON = JSON.stringify(myDog);

function doGet(request) {
  return ContentService.createTextOutput(myJSON).setMimeType(ContentService.MimeType.JSON);
}
It always returns this error. Strangely, clicking the test button returns the entire object just fine in all cases.
Your script looks good. Make sure you have published a new version of the web app after changing the code.

IdHTTP.Get(url, ss) gives 403 Forbidden

I'm using IdHTTP to execute PHP files on a server. This worked fine for years, but suddenly I am getting 403 Forbidden errors with all my programs; archived versions from a year ago now fail as well. The web host says they have changed nothing. To test, I placed a simple PHP file that just echoes a value on 3 separate host platforms (none SSL). Calls to all 3 fail with a 403 error, yet if the URL is typed into a browser's address bar, the call succeeds and the expected value is returned. I also tried running the program via different ISPs. These failures popped up just in the last few days and happen on many different computers.
Here is a very simple example that fails when sent to all 3 test servers:
procedure TForm1.Button1Click(Sender: TObject);
var
  url: string;
  H: TIdHttp;
  SS: TStringStream;
begin
  url := 'http://www.somesite.com/test.php';
  H := TIdHttp.Create(nil);
  SS := TStringStream.Create;
  try
    H.Get(url, SS);
    Edit1.Text := SS.DataString;
  finally
    H.Free;
    SS.Free;
  end;
end;
Any help greatly appreciated.
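One observable difference between the failing Indy client and the working browser is the request headers, most notably the User-Agent: Indy's default is 'Mozilla/3.0 (compatible; Indy Library)', which some hosts' security filters reject outright. A speculative diagnostic sketch (not a confirmed fix for these particular servers) that sends a browser-like User-Agent instead:
procedure TForm1.Button1Click(Sender: TObject);
var
  H: TIdHttp;
  SS: TStringStream;
begin
  H := TIdHttp.Create(nil);
  SS := TStringStream.Create;
  try
    // hypothetical browser-like value; any common browser string will do
    H.Request.UserAgent := 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)';
    H.Get('http://www.somesite.com/test.php', SS);
    Edit1.Text := SS.DataString;
  finally
    SS.Free;
    H.Free;
  end;
end;
If this version succeeds where the original fails, the 403 is coming from a filter in front of the host (for example mod_security) rather than from the PHP script itself.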

How cookies are managed by networking code

I recently migrated over from C# and am looking to recreate some of my old applications. As such, I needed to find a way to manage sessions within Go web requests. I found a solution in the form of this code:
// Jar is the session object struct - a cookie jar including a mutex for syncing
type Jar struct {
    sync.Mutex
    cookies map[string][]*http.Cookie
}

// NewJar creates a cookie jar for use
func NewJar() *Jar {
    jar := new(Jar)
    jar.cookies = make(map[string][]*http.Cookie)
    return jar
}

// SetCookies sets the cookies for the jar
func (jar *Jar) SetCookies(u *url.URL, cookies []*http.Cookie) {
    jar.Lock()
    if _, ok := jar.cookies[u.Host]; ok {
        for _, c := range cookies {
            jar.cookies[u.Host] = append(jar.cookies[u.Host], c)
        }
    } else {
        jar.cookies[u.Host] = cookies
    }
    jar.Unlock()
}

// Cookies returns the cookies stored for a host
func (jar *Jar) Cookies(u *url.URL) []*http.Cookie {
    return jar.cookies[u.Host]
}

// NewJarClient creates a new client utilising a NewJar()
func NewJarClient() *http.Client {
    proxyURL, _ := url.Parse("http://127.0.0.1:8888")
    tr := &http.Transport{
        MaxIdleConns:       10,
        IdleConnTimeout:    30 * time.Second,
        DisableCompression: true,
        Proxy:              http.ProxyURL(proxyURL),
    }
    return &http.Client{
        Jar:       NewJar(),
        Transport: tr,
    }
}
The problem I'm having is in understanding how this works. I create a client by doing the following:
client := NewJarClient()
but then when I issue networking functions using it, such as a GET request, the cookies automatically carry over and it all works as planned. The problem is I have no idea why. I see no mention of methods such as Cookies or SetCookies ever being called; it seems to just handle everything by magically running the functions. Could someone annotate or explain the given methods line by line, or in a way that would make better sense to someone coming over from a C# background? Thanks :)
NewJar allocates and returns a new instance of type *Jar. Now *Jar, thanks to the methods defined on it, implicitly implements the interface called CookieJar.
The http.Client type has a field called Jar which is declared with the type CookieJar. That means you can set http.Client.Jar to anything that implements the CookieJar interface, including the *Jar type. The NewJarClient function returns a new *http.Client instance with its Jar field set to the *Jar instance returned by NewJar.
This allows the returned client value to use *Jar's methods without really knowing that it's a *Jar; it only knows that the value in its Jar field has the same set of methods as those defined by the CookieJar interface.
So the http.Client instance, when sending requests, uses your *Jar by calling its methods, providing the parameters and handling the returned values. How your *Jar is used by the client is an implementation detail of the http.Client type, and you don't have to worry about it. You just need to make sure that the methods implementing the CookieJar interface do what you want them to do; how and when they are called is up to the client.
But if you're interested in the implementation details of the client anyway, you can check out the source file of http.Client.
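For reference, this is the CookieJar interface as declared in the standard library's net/http package; *Jar satisfies it precisely because it defines both methods with these exact signatures. The var _ line is a common compile-time assertion you can add to your own package to make the relationship explicit:
// Declared in net/http (reproduced here for reference):
type CookieJar interface {
    SetCookies(u *url.URL, cookies []*http.Cookie)
    Cookies(u *url.URL) []*http.Cookie
}

// In your own package: fails to compile if *Jar stops satisfying the interface.
var _ http.CookieJar = (*Jar)(nil)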
Due to misinformation in the form of a few dated blog posts, I was under the impression that, for some weird reason, I was unable to maintain cookies across requests in Go. Having thought that, I researched and wrote my own implementation, which can be seen above. It has since been brought to my attention that my implementation is completely broken and flawed, and that the standard http library can perfectly well maintain cookies by itself, simply by supplying a value for Jar when creating a client. For example:
jar, _ := cookiejar.New(nil)
proxyURL, _ := url.Parse("http://127.0.0.1:8888")
tr := &http.Transport{
    MaxIdleConns:       10,
    IdleConnTimeout:    30 * time.Second,
    DisableCompression: true,
    Proxy:              http.ProxyURL(proxyURL),
}
c := &http.Client{
    Jar:       jar,
    Transport: tr,
}
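For reference, cookiejar here is the standard net/http/cookiejar package, whose jar is RFC 6265 aware (the Domain, Path and expiry handling that the hand-rolled jar above lacks). A small self-contained sketch with hypothetical URLs, showing that cookies set by one response are replayed on the next request automatically:
package main

import (
    "fmt"
    "net/http"
    "net/http/cookiejar"
)

func main() {
    jar, _ := cookiejar.New(nil)
    c := &http.Client{Jar: jar}

    // Hypothetical endpoints: any Set-Cookie headers in the first response
    // are stored in the jar and sent automatically with the second request.
    resp, err := c.Get("http://www.example.com/login")
    if err != nil {
        fmt.Println(err)
        return
    }
    resp.Body.Close()

    resp, err = c.Get("http://www.example.com/profile")
    if err != nil {
        fmt.Println(err)
        return
    }
    resp.Body.Close()
    fmt.Println("cookies now stored for this host:", len(jar.Cookies(resp.Request.URL)))
}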

Is delegating JSON.parse to a web worker worthwhile (in a Chrome Extension/FF Addon)?

I am writing a Chrome Extension which stores a great deal of data in the browser's localStorage and parses it on every page load. Now, as the size of the data increases, page load time/performance starts degrading, so I thought of delegating the parsing to a web worker, though I am doubtful whether it is worthwhile. What I can do is pass the string to be parsed to the worker like this:
worker.postMessage(localStorage['myObj']);
I plan to parse this string into an object in the worker and send it back, receiving it on the main thread like so:
worker.onmessage = function(e) {
  var myObj = e.data;
  // then play around with my object here
};
But as I googled the performance aspects of this method, including the message posting and listening overheads, and the fact that some browsers don't allow sending JSON objects in the message while others serialize them automatically while sending, I doubt whether this method is worthwhile.
Since my app is just a Chrome Extension and also a Firefox Addon, I am concerned with only these two browsers. Can anyone tell me whether this method is suitable for them?
The currently-accepted answer is simply incorrect. It's entirely feasible to do what you asked about; an example is below.
Whether it's worth doing is something you'll have to check with real data in your real scenario. There's overhead associated with sending the JSON text to the worker and having it send back the parsed result (which may well involve some under-the-covers serialization, though it's probably not JSON), and modern browsers parse JSON very quickly.
I suspect the overhead isn't worth it, but on huge JSON strings, if the browser's native serialization mechanism is either much faster than JSON parsing or takes place on a thread other than the main UI thread, it could be.
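Here's a rough way to measure it with your real data, using the same inline-worker trick as the example below. It assumes the localStorage key 'myObj' from the question and simply logs both timings to the console:
// Time the synchronous path on the main thread.
var json = localStorage['myObj'];
console.time('parse on main thread');
var obj = JSON.parse(json);
console.timeEnd('parse on main thread');

// Time the full worker round trip: post, parse in the worker, clone back.
var blob = new Blob(
  ['this.onmessage = function(m) { postMessage(JSON.parse(m.data)); };'],
  { type: 'text/javascript' });
var w = new Worker(URL.createObjectURL(blob));
console.time('worker round trip');
w.onmessage = function(e) {
  console.timeEnd('worker round trip');
};
w.postMessage(json);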
Example of using a worker to parse JSON:
// This stands in for 'worker.js':
var blob = new Blob([
  'this.onmessage = function(message) {\n' +
  '  postMessage(JSON.parse(message.data));\n' +
  '};'
], { type: "text/javascript" });
var workerUrl = URL.createObjectURL(blob);

// Main script:
var w = new Worker(workerUrl/*"worker.js"*/);
w.onmessage = function(message) {
  display("Got response: typeof message.data: " + typeof message.data);
  display("message.data.foo = " + message.data.foo);
};
display('Posting JSON \'{"foo":"bar"}\' to worker');
w.postMessage('{"foo":"bar"}');

function display(msg) {
  var p = document.createElement('p');
  p.innerHTML = String(msg);
  document.body.appendChild(p);
}
Result:
Posting JSON '{"foo":"bar"}' to worker
Got response: typeof message.data: object
message.data.foo = bar
Only strings, not objects, can be passed to and from WebWorkers. If you parse a string into a JSON object within a WebWorker, you will need to stringify the object and then reparse it when passing it from the worker to your main script. Obviously, this will cause the JSON to be double-parsed unnecessarily and so is a bad idea.
2017 Update: More than just strings are allowed now. See some of the (much newer) answers and comments for reference.