Kaazing WebSocket cross-browser support configuration

Does anyone know how to set up Kaazing WebSockets for use with IE and Firefox? My app works great in Safari and Chrome, but I can't get it to work with these two browsers. It just gives me a "WebSocket is not defined" error.
According to their website, I thought all you needed was to add:
<head>
<meta name="kaazing:postMessageBridgeURL"
content="http://www.example.com/bridge/PostMessageBridge.html" >
</head>
But I put the file there and it is still failing.

You only need that tag you mentioned if you intend to do cross-origin connectivity in IE6 and IE7.
But if you just want basic WebSocket to work in those browsers, what you need to do is include the WebSocket.js or ByteSocket.js file. In summary, if you intend to use text messages, put this in your HTML page:
<script src="/html5/WebSocket.js"></script>
If you intend to use binary messages, then do this instead:
<script src="/html5/ByteSocket.js"></script>
This is more fully explained in the documentation here for typical usages of WebSocket:
https://kaazing.com/getting-started/
Regards,
Robin Zimmermann
Kaazing Product Management

Which version of the gateway are you using?
The step you mentioned is only required if you are integrating with another web server (like Apache). Is that what you're doing?

Related

Angular CLI - selective usage of script's crossorigin attribute

To sketch out my problem: we have an Angular app, which is served by a basic-auth-protected server. This app is supposed to run, among others, in IE11 and Safari 12, and we have a differential build to cover ES5 and ES2015+ environments.
Now when it comes to the build-produced index.html, Safari 12 needs to have the crossorigin attribute on <script type="module"> there in order to send BA credentials to the BA-protected server (although the client and server domains match). Without the attribute it doesn't send them, and the request fails with 401. It turns out that some older browsers, like Safari 12, implement an older version of the spec for type="module" script fetching, which doesn't send the auth credentials automatically. Newer browsers implement a newer version of the spec and the credentials are sent correctly (that's why Safari 13 works for me).
So my fix for this issue was to set the crossorigin="anonymous" attribute via angular.json. This causes every build to produce <script> tags for both the ES5 and ES2015 bundles with this attribute, which fixed the issue in Safari 12. So far so good.
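For reference, the relevant part of my angular.json looks roughly like this (the project name is a placeholder and unrelated options are omitted):
{
  "projects": {
    "my-app": {
      "architect": {
        "build": {
          "options": {
            "crossOrigin": "anonymous"
          }
        }
      }
    }
  }
}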
But then I discovered that IE11 doesn't handle this attribute well. In fact, the crossorigin attribute is not supported in IE11, which shouldn't make a difference, but IE11 doesn't send the credentials when requesting the script, regardless of its type="", which naturally fails with 401 again. When I remove the crossorigin attribute it fixes the issue in IE11, but, guess what, breaks Safari 12 again.
With all that in mind, the logical solution would be to add crossorigin only to the ES2015 scripts (for Safari 12) and leave the ES5 scripts intact (for IE11). This is more difficult than it sounds, though, because angular-cli doesn't seem to have an option to distinguish based on ES target; it only provides an option to set crossorigin on all produced scripts.
And this is where the actual questions come in:
Is there an easy way to do this with angular-cli, outside of writing some kind of Node script that does it for me?
Am I perhaps approaching this problem from the wrong angle? Is there a better way to solve this credential-sending problem?
And a bonus question: why isn't IE11 able to handle the (to it) unknown crossorigin attribute? I thought unknown attributes were just ignored.

Can we do web push notifications in chrome without using GCM/FCM?

I am trying to do web push notifications in Chrome without using GCM/FCM. Is it possible? I'm not able to find examples on how to use a different push service.
No, it is not possible to use another push service.
In Firefox, you can do it by modifying the dom.push.serverURL preference, but obviously you'd need privileged access to alter the value of the pref.
There are third-party services that you can use to implement push notifications, but they will use the Web Push API under the hood (so Autopush on Firefox, GCM/FCM on Chrome).
Yes. Using the VAPID spec and a service worker, you can use web push notifications without FCM/GCM. For more information, please look at the Google doc below:
https://developers.google.com/web/fundamentals/engage-and-retain/push-notifications/how-push-works
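To give a rough idea of what that looks like on the page, here is a sketch of subscribing with a VAPID key; the service worker file name and the key value are placeholders, and the key must be the base64url-encoded public half of your VAPID key pair:
<script>
  // Converts a base64url-encoded VAPID public key into the Uint8Array
  // that pushManager.subscribe() expects.
  function urlBase64ToUint8Array(base64String) {
    var padding = '='.repeat((4 - (base64String.length % 4)) % 4);
    var base64 = (base64String + padding).replace(/-/g, '+').replace(/_/g, '/');
    var raw = window.atob(base64);
    var output = new Uint8Array(raw.length);
    for (var i = 0; i < raw.length; ++i) output[i] = raw.charCodeAt(i);
    return output;
  }

  navigator.serviceWorker.register('/sw.js').then(function (registration) {
    return registration.pushManager.subscribe({
      userVisibleOnly: true,
      applicationServerKey: urlBase64ToUint8Array('<your VAPID public key>')
    });
  }).then(function (subscription) {
    // Send subscription.endpoint and its keys to your server; the server
    // then signs its push requests with the matching VAPID private key.
    console.log(JSON.stringify(subscription));
  });
</script>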
I have used VAPID for Web Push. It works in Firefox and in the Edge browser, but not in Chrome.
Also, in Firefox the notification actions don't seem to work, whereas in Edge the notification action buttons do work.
It can be done using service workers. It's a new W3C feature.
I've not tried it yet, but you can have a look at it:
https://developers.google.com/web/fundamentals/getting-started/codelabs/push-notifications/
It's not compatible with all browsers. Ref.: http://caniuse.com/#feat=serviceworkers
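From the codelab, the service-worker side boils down to roughly this kind of listener (the JSON payload shape here is only an assumption; adjust it to whatever your server sends):
self.addEventListener('push', function (event) {
  // Assumes the server sends a small JSON payload with title/body fields.
  var data = event.data ? event.data.json() : {};
  event.waitUntil(
    self.registration.showNotification(data.title || 'Notification', {
      body: data.body || ''
    })
  );
});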
Good grief, the advice here is disgustingly bad.
Yes, you can do it using secure (HTTPS) WebSockets, and also with a Microsoft project called SignalR, which doesn't even "need" browser support, i.e. it will work in JavaScript no matter what.
The reason I mention SignalR is that it degrades the mechanism to the best fit to ensure it works whatever the weather. The techniques it uses range from old-school long polling all the way up to WebSockets under the covers when they're available (and it gracefully falls back to other techniques and technologies when they aren't, while the application code remains the same).

Does rel="preload" work with an HTTP server?

If I have a link I'm trying to preload, e.g.
<link rel="preload" href="http://example.com/example.js">
and I don't know whether example.com is an HTTP/1.1 or HTTP/2 server. Does the preload specification do anything if it's only HTTP/1.1?
Yes, preload can be used by both.
Some servers use the preload header to implement HTTP/2 push. See here for an explanation for Apache HTTP/2 push and here for how Cloudflare are doing this in a similar manner on a customised version of Nginx.
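As an illustration, the header form looks roughly like this (the path is just a placeholder). Over plain HTTP/1.1 it simply acts as a hint the browser may act on, while an HTTP/2 server such as Apache can additionally turn it into a push:
Link: </assets/example.js>; rel=preload; as=script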
However, there are also many use cases for the browser using it outside of HTTP/2, for example to preload fonts from other domains (HTTP/2 push obviously only works from your own server). That doesn't need HTTP/2, just browser support, which is not great at the moment: basically just Chrome and Opera, though WebKit (used by Safari) has just implemented it, so it should be rolled out soon enough. Mozilla/Firefox have not implemented it yet, and neither have Microsoft/Edge.
Smashing Mag have a great article on this subject and why browser usage and HTTP/2 server usage complement each other and are useful for different use cases.

Youtube https embedding causes warning in Firefox

I'm working on a site that requires a login and includes embedded Youtube videos. Because of the login, I need to get SSL working, which it largely is. I'm hitting an unexpected problem with the Youtube embeds, though. It's easy enough to point at https://www.youtube.com, but Firefox still complains that there's unencrypted content on an encrypted page. According to Firebug, the only unencrypted load was from http://[stuff].youtube.com/videoplayback?[more stuff].
Now, it's perfectly understandable that Youtube doesn't want the overhead of encrypting their video streams, and I don't think that this poses an actual security vulnerability. I just need to keep the browser happy. (I know that that warning can be disabled, of course, but I can't do that on my users' machines.) There must be a way to do this, because https://www.youtube.com itself doesn't make this error pop up, even though it uses http: for the video streams, too.
I have not seen similar errors in other browsers, but I haven't looked very hard just yet.
If it matters, my development machine doesn't have a valid SSL certificate; I just added an exception.
If you are using <iframe>, use <embed> instead, or check the other embedding code options that the YouTube API provides.
I have an SSL-secured website and this works for me in Firefox:
<iframe id="player" src="https://www.youtube.com/embed/XfI....Ctpo?enablejsapi=1&origin=https://yourdomain.com&showinfo=0&iv_load_policy=3&modestbranding=1&theme=light&color=white&rel=0" frameborder="0"></iframe>
I don't have a solution, but a suggestion instead: are you sure that not having a valid SSL certificate couldn't have something to do with this? You wouldn't think so, but you never know. If you get one and it still doesn't work, it's not something you wouldn't have had to do anyway. I went through the process of obtaining, installing and configuring SSL keys and certificates for my home server, and every little thing seems to have an impact on how SSL acts/reacts.
Also, have you tried accessing the site from outside the local network it's on? It sounds like you're on the same network as the server hosting the site (the one that has SSL installed), which can create problems itself because of NAT traversal (I believe, but correct me if I'm wrong; we're all here to learn). Sometimes with HTTPS you can have a problem connecting to resources within the local network that people on the internet would have no problem at all connecting to. Just my two cents, and sorry for any incorrect info, if I provided any. Take this all with a grain of salt, but hopefully you'll find the answer to your problem. Things like this can be a pain in the rump.
There may also not be anything you can do about this, because YouTube seems not to serve the video content over HTTPS, which is out of your control. I know you don't contest the error you're being given and just want a workaround, however.
BTW, I think their homepage is HTTPS-enabled, just not their video content, so that's why embedding the homepage wouldn't produce the error.
EDIT: Also, I see someone else wrote to use embed instead of iframes, which I would also recommend. The browser treats iframes like another page, but the error you're getting indicates the insecure content is actually combined with the secure content, so everything should be fine with that... but you never know.
Try the page with the Firefox inspector / network analysis tool (Shift-Ctrl-I) to analyze which elements are requested. I guess it's some JavaScript inside the <iframe> that you don't have under your control. In any case, you should be able to pinpoint the specific trigger with this tool.
Check whether it makes a difference when switching your browser to HTML5 instead of Flash for the video, or vice versa. YouTube recently changed the default player to HTML5.
Is it possible, given your website design, that you could try fetching the YouTube videos via an HTTP call instead of HTTPS? I don't know the layout of your site, but if you just want it to stop complaining, that should do it.
That said, YouTube does have valid HTTPS certificates, but that's due to the Google integration. Since you aren't Google, you wouldn't read as the valid certificate holder when accessing YouTube's content (that's the exact kind of thing SSL is meant to guard against).
So, basically, if you can, just embed via HTTP instead of HTTPS. Your site can still be HTTPS, just not the call to YouTube.
Try removing the protocol and then check. Just remove the scheme (http or https) together with the colon, leaving a protocol-relative URL, and it will work. For example:
<iframe id="player" src="//www.youtube.com/embed/XfI....Ctpo?enablejsapi=1&origin=https://yourdomain.com&showinfo=0&iv_load_policy=3&modestbranding=1&theme=light&color=white&rel=0" frameborder="0"></iframe>
A much simpler way to do this is to download the video itself and then serve it locally from your own server, e.g. save it in the same directory as your page and just link to it there.

Preventing secure/insecure errors by using protocol relative URLs for image source

Is anyone aware of whether it is problematic to use protocol-relative URLs for an image source to prevent mixed-content security warnings?
For example linking an image like:
<img src="//domain.com/img.jpg" />
instead of:
<img src="http://domain.com/img.jpg" />
or
<img src="https//domain.com/img.jpg" />
In my testing I've not seen anything to suggest this is wrong, but I'm not sure if it has edge cases where it will create problems.
EDIT: I've seen it throw errors when using PHP's getimagesize function.
Found an interesting gotcha for the use of protocol-relative URLs:
You have to be careful to only use this syntax in pages destined for browsers. If you put it in an email, there will be no base page URL to use in resolving the relative URL. In Outlook at least, this URL will be interpreted as a Windows network file, not what you intended.
from here
Essentially, though, there are no valid reasons why this shouldn't work, as long as the request is made by a browser and not an external email client.
More info from here:
A relative URL without a scheme (http: or https:) is valid, per RFC 3986, Section 4.2. If a client chokes on it, then it's the client's fault, because it's not complying with the URI syntax specified in the RFC.
Your example is valid and should work. I've used that relative URL method myself on heavily trafficked sites and have had zero complaints. Also, we test our sites in Firefox, Safari, IE6, IE7 and Opera. These browsers all understand that URL format.
IE 7 and IE 8 will download stylesheets twice if you're using a protocol-relative URL. That won't affect you if you only use it "for an image source", but just in case.
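For clarity, the pattern that triggers that double download is simply a protocol-relative stylesheet reference like the following (domain and path are placeholders):
<link rel="stylesheet" href="//example.com/css/main.css" />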
The following should be considered when using Protocol-Relative URLs:
1) All modern browsers support this feature.
2) We have to be sure that the requested resource is accessible over both HTTP and HTTPS. If HTTP redirects to HTTPS it is fine, but the load time will be a little longer than if the request had been made directly over HTTPS.
3) Internet Explorer 6 does not support this feature.
4) Internet Explorer 7 and 8 support the feature, but they will download a stylesheet twice if protocol-relative URLs are used for the css files.