Ok, here is a strange one.
For testing this issue I have an endpoint like this:
[HttpPut]
public string Put(object value)
{
    return value != null ? "ok" : "fail";
}
When throwing data directly at it, the value object always becomes the JSON I throw at it (as expected). However, the app I'm developing requires me to go through a gateway that adds security to the call by verifying some headers and so on.
When going through the gateway that forwards to my endpoint, everything also seems OK until the body of the call hits a certain size (apparently 471 characters is OK, 472 is not); then the value object is null (and therefore the method returns 'fail').
All this led me to believe that the gateway truncates the body at a certain size, making it invalid JSON so that the value object becomes null. But after talking to the provider of the gateway, they told me that they tested it and this could not be the issue.
Here comes the really strange part.
In my further pursuit of the problem I added a BeginRequest listener like this:
context.BeginRequest += Application_BeginRequest;
I added only one line to the interceptor for debugging purposes (to see if the body got truncated):
HttpContext.Current.Request.SaveAs("c:\\test.txt", true);
After adding this line everything works like a charm: all calls get through (regardless of size) and value is never null. I tried removing the line again, and we are back to the original issue where it fails at a certain size.
What on earth is going on?
I really need some advice on how to proceed with debugging this issue.
Update: it turned out to be an issue in the 3rd-party gateway after all. They are now trying to fix the problem.
I am currently reading Async Javascript by Trevor Burnham. This has been a great book so far.
He talks about this snippet and console.log being 'async' in the Safari and Chrome console. Unfortunately I can't replicate this. Here is the code:
var obj = {};
console.log(obj);
obj.foo = 'bar';
// my outcome: Object {}; 'bar'
// the book's outcome: {foo: bar}
If this was async, I would anticipate the outcome to be the book's outcome: console.log() is put in the event queue until all the code is executed, then it is run, and by then the object would have the foo property.
It appears, though, that it is running synchronously.
Am I running this code wrong? Is console.log actually async?
console.log is not standardized, so the behavior is rather undefined and can change easily from release to release of the developer tools. Your book is likely outdated, as my answer might soon be.
To our code, it makes no difference whether console.log is async or not; it does not provide any kind of callback, and the values you pass are always referenced and computed at the time you call the function.
We don't really know what happens then (OK, we could, since Firebug, Chrome Devtools and Opera Dragonfly are all open source). The console will need to store the logged values somewhere, and it will display them on the screen. The rendering will happen asynchronously for sure (being throttled to rate-limit updates), as will future interactions with the logged objects in the console (like expanding object properties).
So the console might either clone (serialize) the mutable objects that you did log, or it will store references to them. The first one doesn't work well with deep/large objects. Also, at least the initial rendering in the console will probably show the "current" state of the object, i.e. the one when it got logged - in your example you see Object {}.
However, when you expand the object to inspect its properties further, it is likely that the console will have only stored a reference to your object and its properties, and displaying them now will then show their current (already mutated) state. If you click on the +, you should be able to see the bar property in your example.
Here's a screenshot that was posted in the bug report to explain their "fix": [screenshot not reproduced here]
So, some values might be referenced long after they have been logged, and the evaluation of these is rather lazy ("when needed"). The most famous example of this discrepancy is handled in the question Is Chrome's JavaScript console lazy about evaluating arrays?
A workaround is to make sure you always log serialized snapshots of your objects, e.g. by doing console.log(JSON.stringify(obj)). This will work only for non-circular and rather small objects, though. See also How can I change the default behavior of console.log in Safari?.
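For example, logging a serialized snapshot pins the value down at the time of the call:

var obj = {};
console.log(JSON.stringify(obj)); // logs "{}" - the state at call time
obj.foo = 'bar';
console.log(JSON.stringify(obj)); // logs "{\"foo\":\"bar\"}"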
The better solution is to use breakpoints for debugging, where the execution completely stops and you can inspect the current values at each point. Use logging only with serializable and immutable data.
This isn't really an answer to the question, but it might be handy to someone who stumbled on this post, and it was too long to put in a comment:
window.console.logSync = (...args) => {
  try {
    // Serialize and re-parse each argument so we log a detached snapshot
    // rather than a live reference that the console may render lazily.
    args = args.map((arg) => JSON.parse(JSON.stringify(arg)));
    console.log(...args);
  } catch (error) {
    // Fall back to a plain log for values that can't be serialized.
    console.log('Error trying to console.logSync()', ...args);
  }
};
This creates a pseudo-synchronous version of console.log, but with the same caveats as mentioned in the accepted answer.
Since it seems like, at the moment, most browsers' console.log implementations are asynchronous in some manner, you may want to use a function like this in certain scenarios.
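For instance, a quick usage sketch (the state object here is just an illustration):

const state = { count: 0 };
console.logSync(state); // logs a detached snapshot: { count: 0 }
state.count++;          // the already-logged snapshot is unaffected by this mutation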
When using console.log:
a = {}; a.a = 1; console.log(a); a.b = function () {};
// without b
a = {}; a.a = 1; a.a1 = 1; a.a2 = 1; a.a3 = 1; a.a4 = 1; a.a5 = 1; a.a6 = 1; a.a7 = 1; a.a8 = 1; console.log(a); a.b = function () {};
// with b, maybe
a = {}; a.a = function () {}; console.log(a); a.b = function () {};
// with b
In the first situation the object is simple enough that the console can 'stringify' it and then present it to you; but in the other situations, a is too 'complicated' to 'stringify', so the console shows you the in-memory object instead - and yes, by the time you look at it, b has already been attached to a.
I've got an application that's been working for a long time.
Recently we created a new app/keys for it, and it's behaving strangely.
(I did figure out that scope requirements had been put in place. I am requesting bucket:create bucket:read data:read data:write.)
When I upload a file to a bucket, I've traditionally made a call to get the object details afterwards, to verify that it uploaded successfully.
With the new key, I am intermittently getting this error:
GetObjectDetails: InternalServerError {"fault":{"faultstring":"Execution of ServiceCallout servicecallout-auth-acm-request failed. Reason: timeout occurred servicecallout-auth-acm-request","detail":{"errorcode":"steps.servicecallout.ExecutionFailed"}}}
Is this something I should be retrying with a sleep in between? Or is it indicative of something wrong with the upload?
(FYI - putting in a retry seems to have resolved this for me, but I still don't know if that's the right answer - and whether this issue might happen on other calls.)
It could be that the service requires a slight delay between a put object and a get, so I would suggest either using a timer or a retry, as you mentioned. However, a successful response from the upload should be enough to ensure your object has been placed in the bucket, without the need to double-check.
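If you do retry, a minimal sketch along these lines would do (getObjectDetailsWithRetry and getObjectDetails are hypothetical names standing in for your existing details call, not specific SDK functions):

// Retry with a fixed delay between attempts; names are illustrative only.
async function getObjectDetailsWithRetry(bucketKey, objectName, attempts = 3, delayMs = 1000) {
  for (let i = 0; i < attempts; i++) {
    try {
      return await getObjectDetails(bucketKey, objectName); // your existing call
    } catch (err) {
      if (i === attempts - 1) throw err; // out of retries: surface the error
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}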
In my application I have dynamic field sets on what is otherwise the same form. I can load them from the server as javascript includes and that works OK.
However, it would be much better to be able to load them from a separate API.
$.getJSON() provides a good way to load the JSON, but I have not found the right place to do this. Clearly it needs to be completed before the compile step begins.
I see there is a fieldTransform facility in formly. Could this be used to transform vm.fields from an empty object to whatever comes in from the API?
If so how would I do that?
Thx. Paul
There is an example on the website that does exactly what you're asking about. It uses $timeout to simulate an async operation to load the field configuration, but you could just as easily use angular's own $http to get the JSON from the server. It hides the form behind an ng-if and only shows the form when the fields return (when ng-if resolves to true, it compiles the template).
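In outline, the pattern looks something like this (a sketch assuming the controllerAs "vm" style from that example; the URL and flag name are made up):

vm.fields = [];
vm.fieldsReady = false; // template: <form ng-if="vm.fieldsReady">...</form>
$http.get('/api/form-fields').then(function (response) {
  vm.fields = response.data; // field configuration from the API
  vm.fieldsReady = true;     // ng-if flips to true and the form compiles
});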
Thx #kent
OK, so we need to replace the getFields() promise with this:
function getFields() {
    return $http.get('fields-demo.json', {headers: {'Cache-Control': 'no-cache'}});
}
This returns data.fields, so in vm.loadingData we say:
vm.fields = result[0].data;
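Put together, the wiring looks roughly like this (a sketch assuming the $q.all-based loadingData promise from the docs example):

vm.loadingData = $q.all([getFields()]).then(function (result) {
  vm.fields = result[0].data; // $http resolves with a response object; the field config sits on .data
});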
Seems to work OK for me.
When testing, I noticed that you have to make sure there is nothing wrong with your JSON, such as using a field type you haven't defined. In that case the resulting error message is not very clear.
Furthermore you need to deal with the situation where the source of the data is unavailable. I tried this:
function getFields() {
    console.log('getting', fields_url);
    return $http.get(fields_url, {headers: {'Cache-Control': 'no-cache'}})
        .error(function() {
            alert("can't get fields from server");
            //return new Promise({status:'fields server access error'}); //??
        });
}
... which does at least throw the alert. However, I'm not sure how to replace the promise so as to propagate the error back to the caller.
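One way to do that (a sketch, assuming angular's $q service is injected alongside $http) is to switch to a catch handler and re-reject:

function getFields() {
    return $http.get(fields_url, {headers: {'Cache-Control': 'no-cache'}})
        .catch(function (response) {
            alert("can't get fields from server");
            return $q.reject(response); // re-reject so the caller's own error handler still fires
        });
}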
Paul
I have an AIR (4.5.1) mobile project that sends an ArrayCollection to the server (Tomcat/BlazeDS).
The server manages the object and returns a string containing the result (ok/error/etc.).
Everything worked fine, until:
I tried to send an ArrayCollection with length > 35000 (not sure of the exact limit).
After sending the ArrayCollection the UI seems frozen for a little while, and after that
I get a FaultEvent error:
NetConnection.Call.Failed: HTTP: Failed
The server, however, received the request, parsed it and returned the result string.
So, because the program gets the FaultEvent, I cannot be sure (from the client) that the request finished correctly...
How can I fix it? And is this problem caused by the length of the ArrayCollection?
Other ideas?
Thanks
This is an ongoing issue with Flex/AIR/Flash. The problem you are running into is the default requestTimeout value of 30 seconds. Even if you change the value in your RemoteObject, it is not used correctly. There are many, many documented bugs on Adobe's site regarding this issue. Below is a link to a site that has collected some info about this problem from around the web. To date Adobe has yet to fix the problem, even though they claim to have done so in previous versions.
RemoteObject Issue
I have installed the sfErrorNotifierPlugin. When both options reportErrors/reportPHPErrors and reportPHPWarnings/reportWarnings are set to false, everything is OK. But when I enable catching PHP exceptions and warnings to receive e-mails, all my tasks fail, including clear-cache. After a few hours of tests I'm 100% sure that the problem is with set_exception_handler/set_error_handler.
There's a similar question:
sfErrorNotifierPlugin on symfony task, but the author there is having problems with a custom task. In my case, even built-in tasks fail.
I haven't used sfErrorNotifierPlugin, but I have run into 'The “default” context does not exist.' messages before. It happens when a call is made to sfContext::getInstance() and the context simply doesn't exist. I've had this happen a lot from within custom tasks. One solution is to add sfContext::createInstance() before the call to sfContext::getInstance(). This will ensure that a context exists.
There's an interesting blog post on 'Why sfContext::getInstance() is bad' that goes into more detail - http://webmozarts.com/2009/07/01/why-sfcontextgetinstance-is-bad/
Well, the problem could not be solved this way, unfortunately. Using sfErrorNotifierPlugin, I enabled reporting of PHP warnings/errors (apart from symfony exceptions), and this resulted in huge problems, e.g. built-in tasks such as clear-cache failed.
The solution I chose was to load the plugin only in non-task mode (project configuration class):
public function setup()
{
    $this->enableAllPluginsExcept('sfPropelPlugin');
    if ('cli' == php_sapi_name()) $this->disablePlugins('sfErrorNotifierPlugin');
}
When a task is executed, everything works normally. When the app is fired from the browser, e-mails are sent when an exception/warning occurs (maybe someone will find it useful).
Arms has explained the problem correctly. But usually the context does not exist when executing backend/maintenance tasks on the console, and it is easier if you handle the condition yourself.
Check if you really need the context.
If you do, what exactly do you need it for?
Sometimes you only want a user to populate a created_by field. You can work around that by hard-coding a user ID.
If you want to do something more integrated, create a page (which will have a context) and trigger the task from there.
You can test the existence of the instance before doing something inside a class, like:
if (sfContext::hasInstance())
    $this->microsite_id = sfContext::getInstance()->getUser()->getAttribute('active_microsite');
I've been experiencing the same problem using the plugin sfErrorNotifier.
In my specific case, I noticed a warning was raised:
Warning: ob_start(): function '' not found or invalid function name in /var/www/ncsoft_qa/lib/vendor/symfony/lib/config/sfApplicationConfiguration.class.php on line 155
Notice: ob_start(): failed to create buffer in /var/www/ncsoft_qa/lib/vendor/symfony/lib/config/sfApplicationConfiguration.class.php on line 155
So, checking the sfApplicationConfiguration.class.php file at line 155,
I replaced the '' with null, and then the warnings disappeared - and so did the error!
ob_start(sfConfig::get('sf_compressed') ? 'ob_gzhandler' : '');   // bad
ob_start(sfConfig::get('sf_compressed') ? 'ob_gzhandler' : null); // good