I want our app to show the online help page (so it's always up to date) or even a local page. However, it's likely to be blocked by the firewall (ZoneAlarm).
BTW, I tested this with ZoneAlarm. It blocked access to a local .html file as well as to an .asp file on the internet. (I.e., I tried to display a page in Internet Explorer and got the ZoneAlarm dialog asking if I wanted to give permission to display it.)
Is there a way around this?
Perhaps displaying the web page in the Web Browser Control?
It's actually very unlikely that web traffic is blocked at the firewall (unless you mean the file type is blocked?). What you may need to do in such a setting, however, is use the same proxy that IE uses, because direct traffic may be blocked.
The simplest way to do that is to use a high-level Windows API or IE itself, and HTTP-download the latest help file if there is a new one - these mechanisms should know about any proxy.
Of course, your users may not be using IE, even if most are. So you might need to allow the user to specify the proxy, or be able to auto-configure the proxy in the same way that the browser does.
Edit: I see you mean ZoneAlarm is part of the problem. Yes, that is tricky, as you will have to either get your application 'blessed' centrally by whoever manages ZoneAlarm in the customer organisation, or (if there is no central management) the user will have to allow the app to communicate. Perhaps you should bite the bullet, have the online help simply be a website, and spawn the preferred browser by 'executing' the URL as suggested in another answer.
If the web browser isn't blocked by the firewall, then the firewall probably allows outbound web traffic (e.g. port 80/8080) for any app, and thus your app shouldn't be blocked either.
If the firewall only allows that traffic for IE, you would have to punch a hole in the firewall to use a new browser like Firefox or Chrome.
To open a web page in the user's preferred browser (with the appropriate proxy and authentication settings), use something like ShellExecute. Something like this would do it (where page is the URL to load):
HINSTANCE r = ShellExecute(NULL, "open", page, NULL, NULL, SW_SHOWNORMAL);
Almost all useful extensions require permission to access and modify all data on a page.
We can't be sure whether a Chrome extension is malicious, in the sense of whether it's leaking my data or not.
I realise that many extensions I use (for example The Great Suspender), even though they need access to all site data, don't actually need to communicate with the outside world.
Is there a way to block specific Chrome extensions from making any network requests at all? (Can we block all outgoing/incoming traffic for a Chrome extension?)
I can't keep monitoring an extension 24/7 to see when it's leaking data; for all you know, it could be leaking once a month.
No, there's no way to block just the network communication of an extension without blocking its site access (aka "host permissions") entirely. That's because a malicious extension can open a tab with its controlling site (or a hidden iframe in the background script), insert JS code as a standard DOM script that the browser attributes to the page itself, and thus communicate with that site's domain to upload the exfiltrated data.
So, what you can do practically is to protect the most sensitive sites you use from all extensions by adding a local ExtensionSettings policy with runtime_blocked_hosts that contains that site(s). This will prevent all extensions from accessing the entire site either via content scripts or network requests. Example: {"*": {"runtime_blocked_hosts": ["*://lastpass.com"]}}. And if you have an extension you trust then you can relax this rule for that extension by using runtime_allowed_hosts. See the policy link above for more examples.
I have the following situation in a web application of mine. The browser that users normally use is Chrome.
I use digital certificates stored on cryptographic cards that users insert into a card reader.
To log in to the application, users basically access an HTTPS link that triggers reading the certificate data.
So far everything works fine.
If the user ends his application session by closing the browser, there is no problem. Everything is over.
But if the user wants to leave his application session without closing all browser windows, that is where my problems start.
There is a button that closes the application session; the user logs out and is redirected to the initial login screen. It seems that everything has been reset, because the user has left. But when the next user wants to log in and presses the link that reads the certificate data, instead of doing a fresh read of the new card, the browser reuses the data from the previous card without even asking for the PIN to access it.
The problem goes further: if, for example, the user has forgotten to insert the card and tries to log in, reading the certificate fails. From then on, even if the card is inserted correctly, it will not be read again until the browser is restarted, because the browser keeps a cached state in which there is no certificate.
At the moment the only solution found is to close all Chrome windows, but that depends on whether the user actually does it.
A partial solution would be to close the browser with JavaScript, but for some time now window.close() cannot close a window that was not opened by the site itself, so with what is available I think that option is ruled out.
Can someone help me with this? Thank you.
Chrome and the rest of the browsers maintain a cache of the SSL/TLS client authentications performed and decide when to prompt the user to select a certificate. There is no "logout" function, nor can the connection be closed from the server side, because of the TLS session resumption mechanism (the client can simply resume the session).
This is a common and well-known issue when defining an authentication system using client certificates. I have only found one workaround: use different domains to force the browser to choose a certificate again:
login.domain.com
-->login1.domain.com
-->login2.domain.com
-->loginN.domain.com
You have a virtual authentication URL, login.domain.com, which redirects the user's browser to a random loginN.domain.com every time you need an authentication. Chrome will detect that it is a different domain and will prompt the user to select a certificate.
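A minimal sketch of that virtual-URL redirect, assuming a Node/Express front end (the pool size, paths and domain are placeholders, and each loginN host must itself be configured to request a client certificate):

```typescript
import express from "express";

const app = express();
const POOL_SIZE = 10; // how many loginN.domain.com aliases exist in DNS/certificates

// Virtual authentication URL: every time an authentication is needed, bounce the
// browser to a random alias so Chrome treats it as a different host and prompts
// for a client certificate again.
app.get("/login", (_req, res) => {
  const n = Math.floor(Math.random() * POOL_SIZE) + 1;
  res.redirect(302, `https://login${n}.domain.com/auth`);
});

app.listen(3000);
```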
You could also think about using different ports instead of different DNS names, but then you could have problems with the user's firewall because you are not using a standard port, and in that case Firefox does not show the certificate selection window either.
I'm working on an AIR app that logs a user in to a remote website. At certain points during the session the user may need to open a page in their browser. When they do that, they are not logged in according to the browser, so they must log in again. I'm trying to log them in through the browser when they log in to the application.
I've read that AIR can manage cookies. I think it's doing that, but I'm not sure. Is there a way to share cookies with the browser? Is that what the manage-cookies setting does?
If none of that is happening, could I create an mx:HTML instance or a StageWebView and do a double login with that? A StageWebView should be using the system browser, correct? The same browser that will launch when navigateToURL() is called.
UPDATE:
It looks like cookies are shared across browsers except in a few cases such as Firefox and Linux. Update again: cookies are shared less often than initially thought. It looks like I might be able to log a user in by creating a StageWebView instance. I will have to double-check to make sure it's the default browser and not the internal WebKit.
UGH. It looks like StageWebView on the desktop uses the internal WebKit. There is a useNative property, though. But even if I can use the native system browser, I'm not sure how to log someone in with it, because I don't think I can POST to it? I think I can only set the URL, which would be a GET...
...It looks like I can create a POST request and then use navigateToURL() to load that request. It would be hacky, but it might work.
ARG. It looks like AIR doesn't support POST through navigateToURL().
I don't know why you want to complicate things by insisting on POST. You can use GET by sending some temporary identifier (a token, a hash, ...), like some websites do with their newsletters when they let users log in just by clicking a simple link. That link would be generated by your server-side script after your user has been successfully identified; then, when the user opens the link in the browser, you can verify that information and create your cookies...
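A rough sketch of the server side of that idea, assuming an Express backend (the endpoint names, example.com domain and in-memory token store are all placeholders):

```typescript
import express from "express";
import crypto from "crypto";

const app = express();

// In-memory token store purely for illustration; a real app would persist tokens
// with an expiry and tie them to the user authenticated in the AIR app.
const pendingTokens = new Map<string, string>(); // token -> userId

// Called by the AIR app after its own login succeeds; returns a one-shot URL.
app.post("/browser-login-link", (_req, res) => {
  const userId = "user-123"; // placeholder: resolve from the AIR session
  const token = crypto.randomBytes(32).toString("hex");
  pendingTokens.set(token, userId);
  res.json({ url: `https://example.com/browser-login?token=${token}` });
});

// Opened in the system browser (e.g. via navigateToURL); creates the cookies there.
app.get("/browser-login", (req, res) => {
  const token = String(req.query.token ?? "");
  const userId = pendingTokens.get(token);
  if (!userId) {
    res.status(403).send("Invalid or expired token");
    return;
  }
  pendingTokens.delete(token); // one-time use
  res.cookie("session", `${userId}:${crypto.randomBytes(16).toString("hex")}`, {
    httpOnly: true,
    secure: true,
  });
  res.redirect("/dashboard");
});

app.listen(3000);
```

The AIR side would then just pass the returned URL to navigateToURL(new URLRequest(url)), which is a plain GET.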
Hope that can help.
My site is using HTTPS only.
I allow using BBCodes to show images. Users are placing images like "https://imagehoster.net/img.png", and the image hoster is using a redirect so the browser loads it via HTTP as "http://imagehoster.net/img.png". This makes the browser show annoying mixed content warnings. Is there a way to prevent this?
Short: NO
Long:
They have no web server actually listening for SSL.
In fact, there is only a firewall/proxy which sends an HTTP redirect (Location) to the browser.
You can't intercept that request, and even if you could, where would you redirect to?
They don't provide an SSL server because encryption takes too many resources, or because it causes too much traffic, since proxies can't cache it.
An idea to solve that problem:
Detect those links, download the images and store a copy on your server.
Then replace the link. Maybe you only need to store a preview; if the user clicks on it, redirect to the original link in a new browser window.
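A minimal sketch of that mirroring step, assuming a Node/TypeScript backend (the mirror directory and example.com domain are placeholders):

```typescript
import { createHash } from "crypto";
import { promises as fs } from "fs";

// Hypothetical helper: download an http:// image once, store it locally, and return
// an https:// URL on our own domain so the page no longer mixes content.
async function mirrorImage(originalUrl: string): Promise<string> {
  const name = createHash("sha256").update(originalUrl).digest("hex") + ".img";
  const localPath = `./public/mirror/${name}`; // assumed to be served as /mirror/

  const response = await fetch(originalUrl); // Node 18+ global fetch
  if (!response.ok) throw new Error(`Download failed: ${response.status}`);
  await fs.writeFile(localPath, Buffer.from(await response.arrayBuffer()));

  return `https://example.com/mirror/${name}`;
}

// Usage when rendering a BBCode [img] tag:
// const safeUrl = await mirrorImage("http://imagehoster.net/img.png");
```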
We have a secure website (SSL) in which we want to make calls to google's map server. The map server is http not https and every time there is a refresh of this screen (every minute for us) IE pops up its annoying mixed content message (trying to view a site with secure and non-secure info).
What I am looking for is a way around this. For example, is there a way to proxy the request so that our internal request is https but the other side of the proxy is not secure? I'm trying essentially to spoof the data to trick the browser.
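Something roughly along these lines is what I have in mind (a sketch assuming a Node/Express layer on our side; the route name and whitelist are made up, and as said, it only hides the mixed content from the browser rather than securing the upstream request):

```typescript
import express from "express";

const app = express();

// Hypothetical same-origin proxy: the page requests /mapproxy?url=<http resource>
// over HTTPS, and the server fetches the insecure resource on its behalf.
app.get("/mapproxy", async (req, res) => {
  const target = String(req.query.url ?? "");
  if (!target.startsWith("http://maps.google.com/")) { // whitelist placeholder
    res.status(400).send("Unsupported target");
    return;
  }
  const upstream = await fetch(target); // Node 18+ global fetch
  res.status(upstream.status);
  res.type(upstream.headers.get("content-type") ?? "application/octet-stream");
  res.send(Buffer.from(await upstream.arrayBuffer()));
});

app.listen(3000);
```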
Any ideas here? The actual security of the end point is less important than avoiding the error message itself.
Thanks!
Don
There is a way to suppress this at the browser level, which might not be desirable for you, but I thought I'd throw it out there. In IE, under Tools | Internet Options | Security | Internet Zone | Custom level, you can set "Display mixed content" to Enable. It's probably set to Prompt right now. Again, this is a single-user, browser-level setting, so it probably will not work for you. It also opens up a lot of problems security-wise, and most admins will not do this (DNS poisoning, man-in-the-middle, etc.).
Your second option is to become a premier customer: http://code.google.com/apis/maps/faq.html#ssl
Your third option is to use Virtual Earth - which supports native SSL w/o any strings
EDIT see similar question: here
As of March 2011, the Google Maps API is available to everyone over SSL:
http://googlegeodevelopers.blogspot.com/2011/03/maps-apis-over-ssl-now-available-to-all.html
Here's the problem with that: even though the API is served over SSL, the thumbnail images the map uses for locations are NOT SSL, so you can still get the warning.
Remove runat="server" from the <head> tag where you are using code to link the API to your page.