I have been curious about better ways to cross-browser test than those screenshot services or maintaining my own array of VMs to VNC into. Then today I found crossbrowsertesting.com (which seems to allow you to connect from your browser via VNC to one of their machines running virtually any browser). This is really similar to a solution I had been thinking about, but had veered away from for a few reasons. I have two questions about this service:
If you have used the service, what are its pros/cons?
How do they get around people doing all kinds of nasty things on their VMs, since they give you a full desktop to play around in?
Bonus: how do they get around the legal issues regarding people VNC'ing into Windows and using IE when the connecting clients clearly do not own the software?
Have not used it, sorry, but the standard con for a remote service is that your test site has to be accessible to the web.
You secure the desktops with the tools you get with Windows Server; the ability to lock down a user has been around for a while, although it still needs work.
Bonus: You can licence Terminal Services for multiple users. We frequently use it on our "management" servers that allow all the technical staff to log onto one server in the environment and then connect from there to the production servers. We licence it for everyone to log into at once.
Apart from not being able to access local web sites (i.e. on your company's intranet or your hard drive), crossbrowsertesting.com may also have response time issues. VNC is not a very efficient protocol, and working over VNC can be a pain.
I prefer tools that allow me to install all relevant browsers on my PC, such as BrowserSeal.
I want to make an HTML5 game that can be controlled with a mobile phone. How do I communicate between the mobile and the browser on the PC?
I can think of the following ways:
Bluetooth: This may be the easiest way to use. But I searched and found that Chrome made a Bluetooth API proposal last year which is currently available only on Chrome Dev, which means I cannot make the game available to everyone.
WiFi: I don't know how to set up a host in the browser over WiFi. If that were possible, I could then connect my mobile to the same WiFi network. This should be faster than a WebSocket since it stays on the local network.
WebSocket: There is a lot of information about how to use this. But as it goes over the WAN, it should be slower, so it is my last choice.
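For the WebSocket option, here is roughly what the PC-browser side would look like; this is only a sketch, and the server URL and message format are made up for illustration:

// Minimal sketch of the PC-browser side of option 3 (WebSocket).
// The URL and the message shape are hypothetical; the phone would open
// the same kind of connection and send its input events to the game.
const socket = new WebSocket("ws://game.example.com:8080");

socket.onopen = () => {
  socket.send(JSON.stringify({ type: "hello", role: "pc" }));
};

socket.onmessage = (event: MessageEvent) => {
  const msg = JSON.parse(event.data as string);
  if (msg.type === "input") {
    // apply the controller input received from the phone to the game state
    console.log("button pressed:", msg.button);
  }
};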
So, does anyone know how to achieve this with the first two ways?
So I have been thinking about building quite a complex application, and the idea of building an HTML5 version has become quite an attractive possibility. I have a few questions about it first, however.
My first concern is how reliable the offline application APIs are at the moment. I have been looking into this standard: http://www.whatwg.org/specs/web-apps/current-work/multipage/offline.html and it looks pretty easy to implement, but I am wondering how easy it is to use in practice. And assuming you set up the manifest etc., is the web application just accessed (offline) by going to the same URL you originally downloaded the application from?
My other concern is the use of sockets. This offline application still needs to be able to communicate with local servers. I ideally wanted to avoid having to host a web server, but a socket connection would be plausible. How well do WebSockets currently work when the browser is offline? Is it possible to have a fully networked / interactive browser application running even without an active internet connection (after the app is first downloaded)?
Any insight would be great!
That's a lot of questions; you may want to consider breaking it up into more easily answerable portions more directly related to what, exactly, you're trying to achieve. In the meantime I'll try to provide a short answer to each of your questions:
My first concern is how reliable the offline application APIs are at the moment.
Fairly reliable; they have been implemented for a number of versions across most major web browsers (except IE).
is the web application just accessed (offline) by going to the same URL you originally downloaded the application from?
Yes. Once the offline app has been cached, the application is served from that cache, and no network requests will be made, apart from a check of whether the manifest itself has changed, unless you explicitly request URLs that fall under the NETWORK or FALLBACK sections of the manifest or that aren't covered by the manifest at all.
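To illustrate, a minimal cache manifest might look something like the sketch below (the file names are made up): everything under CACHE is served from the cache, entries under NETWORK always go to the network, and FALLBACK maps an online resource to an offline substitute.

CACHE MANIFEST
# v1 - change this comment to force clients to re-download the cache

CACHE:
index.html
app.js
style.css

NETWORK:
/api/

FALLBACK:
/status/online.txt /status/offline.txt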
This offline application still needs to be able to communicate with local servers. I ideally wanted to avoid having to host a web server, but a socket connection would be plausible.
A Web Socket still requires a web server. The initial handshake for a Web Socket is over HTTP. A Web Socket is not the same thing as a socket in TCP/IP.
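For completeness, a minimal sketch of such a server, assuming Node.js and the third-party 'ws' package (an assumption, not something from the question), might be:

// Minimal WebSocket server sketch using the Node.js 'ws' package.
// The package handles the HTTP upgrade handshake for you.
import { WebSocketServer } from "ws";

const wss = new WebSocketServer({ port: 8080 });

wss.on("connection", (ws) => {
  ws.on("message", (data) => {
    // echo whatever the browser sent straight back to it
    ws.send(`echo: ${data}`);
  });
});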
How well do WebSockets currently work when the browser is offline?
They won't work at all; when you've set a browser to offline mode it won't make any network requests at all. Note that a browser being set to offline is not the same thing as the offline in 'offline API'. The offline API is primarily concerned with whether or not the server hosting the application can be reached, not whether the browser is currently connected to a network or whether that network is connected to the internet. If the server goes down then the app is just as 'offline' as if the network cable on the user's computer got unplugged. Have a read through this blog post, in particular the comments. My usual approach to detecting offline status is to set up a pair of files in the FALLBACK section such that you get one when online and the other when offline - request that file with AJAX and see what you get.
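A rough sketch of that check, reusing the hypothetical FALLBACK pair from the manifest above (where the online copy of /status/online.txt contains the text "online" and its offline substitute does not):

// Hypothetical online check: request a file covered by a FALLBACK rule;
// if the offline substitute comes back, the server is unreachable.
async function isEffectivelyOnline(): Promise<boolean> {
  try {
    const response = await fetch("/status/online.txt", { cache: "no-store" });
    const body = await response.text();
    return body.trim() === "online";
  } catch {
    return false; // the request failed entirely - treat as offline
  }
}

isEffectivelyOnline().then((online) => {
  console.log(online ? "server reachable" : "running from the offline cache");
});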
Is it possible to have a fully networked / interactive browser application running even without an active internet connection?
Yes, but I don't think that means what you think it does. Separate instances of the app running on different browsers on different machines would not be able to communicate with each other without going via the web server. However, there's no requirement that the web server be 'on the internet'; it will do just fine sitting on the local network.
I'm developing on a super fast fibre optic connection.
I want a tool that allows me to test web sites at certain preset speeds. For example, I want to feel the experience of my site loading at modem speeds, then perhaps 1 Mbps, 2 Mbps, etc.
Basically I want to be able to set the speed of the connection so that I get the real feel of the site loading remotely from other countries and connections.
Anyone know of such a tool?
WANEM is a nice open source solution that can simulate network delay, packet loss, packet corruption, disconnections, packet re-ordering, jitter, etc.
It also supports a mode of operation that only uses one network-interface, which makes it super quick to set up a test environment.
EDIT
Although WANEM is a Linux application, you only need to burn the bootable CD and start a machine with that CD; there is no need to sacrifice a machine to run WANEM. If even that's not an option, you can also download it as a virtual appliance that runs in VMware Workstation ($$), VMware Player (free) or VMware Server (free).
However, in my opinion (based on real usage of such products) it's really easier to have the "network simulator" on a separate machine instead of loading it on either the server or the client under test. And as explained above, thanks to the bootable CD option, that can be any machine you have lying around - we typically use decommissioned desktops and notebooks for this purpose.
There are a lot of tools out there, like:
http://www.netlimiter.com/
http://www.antamediabandwidth.com/
...
Basically, most of them work like proxies.
I really know nothing about securing or configuring a "live" internet-facing web server, and that's exactly what I have been assigned to do by management. Aside from the operating system being installed (and Windows Update), I haven't done a thing. I have read some guides from Microsoft and on the web, but none of them seem to be very comprehensive or up to date. Google has failed me.
We will be deploying an MVC ASP.NET site.
What is your personal checklist when you are getting ready to deploy an application on a new Windows server?
This is all we do:
Make sure Windows Firewall is enabled. It has an "off by default" policy, so the out of box rule setup is fairly safe. But it never hurts to turn additional rules off, if you know you're never going to need them. We disable almost everything except for HTTP on the public internet interface, but we like Ping (who doesn't love Ping?) so we enable it manually, like so:
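rem ICMP type 8 is echo request, i.e. inbound ping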
netsh firewall set icmpsetting 8
Disable the Administrator account. Once you're set up and going, give your own named account admin rights. Disabling the default Administrator account helps reduce the chance (however slight) of someone hacking it. (The other common default account, Guest, is already disabled by default.)
Avoid running services under accounts with administrator rights. Most reputable software is pretty good about this nowadays, but it never hurts to check. For example, in our original server setup the Cruise Control service had admin rights. When we rebuilt on the new servers, we used a regular account. It's a bit more work (you have to grant just the rights necessary to do the work, instead of everything at once) but much more secure.
I had to lock down one a few years ago...
As a sysadmin, get involved with the devs early in the project; testing, deployment, operation and maintenance of web apps are part of the SDLC.
These guidelines apply in general to any DMZ host, whatever the OS, Linux or Windows.
There are a few books dedicated to IIS7 admin and hardening, but it boils down to:
Decide on your firewall architecture and configuration and review it for appropriateness. Remember to defend your server against internal scanning from infected hosts.
Depending on the level of risk, consider a transparent application layer gateway to clean the traffic and make the webserver easier to monitor.
Treat the system as a bastion host: lock down the OS, reducing the attack surface (services, ports, installed apps, i.e. NO interactive users or mixed workloads; configure the firewall and RPC to respond only to specified management, DMZ or internal hosts).
Consider SSH, OOB and/or management LAN access, and host IDS / integrity verifiers like AIDE, Tripwire or Osiris.
If the webserver is sensitive, consider using Argus to monitor and record traffic patterns in addition to the IIS/FW logs.
Baseline the system configuration and then regularly audit against the baseline, minimizing or controlling changes to keep this accurate. Automate it; PowerShell is your friend here.
The US NIST maintains a national checklist program repository. NIST, NSA and CIS have OS and webserver checklists worth investigating even though they are for earlier versions. Look at the Apache checklists as well for configuration suggestions, and review the Addison-Wesley and O'Reilly Apache security books to get a grasp of the issues.
http://checklists.nist.gov/ncp.cfm?prod_category
http://www.nsa.gov/ia/guidance/security_configuration_guides/web_server_and_browser_guides.shtml
www.cisecurity.org offers checklists and benchmarking tools for subscribers. Aim for a 7 or 8 at a minimum.
Learn from others' mistakes (and share your own if you make them):
Inventory your public-facing application products and monitor them in NIST's NVD (vulnerability database); they aggregate CERT and OVAL as well.
Subscribe to and read microsoft.public.inetserver.iis.security and the Microsoft security alerts. (NIST's NVD already watches CERT.)
Michael Howard is MS's code security guru. Read his blog (and make sure your devs read it too); it's at http://blogs.msdn.com/michael_howard/default.aspx
http://blogs.iis.net/ is the IIS team's blog. As a side note, if you're a Windows guy, always read the team blog for the MS product groups you work with.
David Litchfield has written several books on DB and web app hardening. He is a man to listen to; read his blog.
If your devs need a gentle introduction to (or reminder about) web security (and sysadmins too!), I recommend "Innocent Code" by Sverre Huseby. I haven't enjoyed a security book like that since The Cuckoo's Egg. It lays down useful rules and principles and explains things from the ground up. It's a great, strong, accessible read.
Have you baselined and audited again yet? (You make a change, you make a new baseline.)
Remember, IIS is a meta service (FTP, SMTP and other services run under it). Make your life easier and run one service at a time on one box. Back up your IIS metabase.
If you install app servers like Tomcat or JBoss on the same box, ensure that they are secured and locked down too.
Secure the web management consoles to these applications, IIS included.
If you have to have the DB on the box too, this post can be leveraged in a similar way.
Logging: an unwatched public-facing server (be it HTTP, IMAP, SMTP) is a professional failure. Check your logs, pump them into an RDBMS and look for the quick, the slow and the pesky. Almost invariably your threats will be automated and boneheaded. Stop them at the firewall level where you can.
With permission, scan and fingerprint your box using p0f and Nikto. Test the app with Selenium.
Ensure webserver errors are handled discreetly and in a controlled manner by IIS AND any applications. Set up error documents for 3xx, 4xx and 5xx response codes.
Now you've done all that, you've covered your butt and you can look at application/website vulnerabilities.
Be gentle with the developers; most only worry about this after a breach, when the reputation/trust damage is done and the horse has bolted and is long gone. Address this now; it's cheaper. Talk to your devs about threat trees.
Consider your response to DoS and DDoS attacks.
On the plus side, consider GOOD traffic/slashdotting and capacity issues.
Liaise with the devs and Marketing to handle capacity issues and server/bandwidth provisioning in response to campaigns, sales and new services. Ask them what sort of campaign response they're expecting.
Plan ahead with sufficient lead time to allow provisioning. Make friends with your network guys to discuss bandwidth provisioning at short notice.
Unavailability due to misconfiguration, poor performance or under-provisioning is also an issue. Monitor the system for performance, disk, RAM, HTTP and DB requests. Know the metrics of normal and expected performance (please God, is there an apachetop for IIS? ;)) and plan for appropriate capacity.
During all this you may ask yourself: "Am I too paranoid?" Wrong question; it's "Am I paranoid enough?" Remember and accept that you will always be behind the security curve and that, while this list might seem exhaustive, it is but a beginning. All of the above is prudent and diligent and should in no way be considered excessive.
Webservers getting hacked are a bit like wildfires (or bushfires here): you can prepare, and that will take care of almost everything except the blue moon event. Plan for how you'll monitor and respond to defacement etc.
Avoid being a security curmudgeon or a security dalek/Chicken Little. Work quietly with your stakeholders and project colleagues; security is a process, not an event, and keeping them in the loop and gently educating people is the best way to get incremental payoffs in terms of security improvements and acceptance of what you need to do. Avoid being condescending, but remember: if you DO have to draw a line in the sand, pick your battles; you only get to do it a few times.
profit!
Your biggest problem will likely be application security. Don't believe the developer when he tells you the app pool identity needs to be a member of the local administrator's group. This is a subtle twist on the 'don't run services as admin' tip above.
Two other notable items:
1) Make sure you have a way to backup this system (and periodically, test said backups).
2) Make sure you have a way to patch this system and ideally, test those patches before rolling them into production. Try not to depend upon your own good memory. I'd rather have you set the box to use Windows Update than to have it disabled, though.
Good luck. The firewall tip is invaluable; leave it enabled and only allow tcp/80 and tcp/3389 inbound.
Use the roles accordingly; the fewer privileges you use for your service accounts, the better.
Try not to run everything as an administrator.
If you are trying to secure a web application, you should keep current with information on OWASP. Here's a blurb:
The Open Web Application Security Project (OWASP) is a 501c3 not-for-profit worldwide charitable organization focused on improving the security of application software. Our mission is to make application security visible, so that people and organizations can make informed decisions about true application security risks. Everyone is free to participate in OWASP and all of our materials are available under a free and open software license. You'll find everything about OWASP here on our wiki and current information on our OWASP Blog. Please feel free to make changes and improve our site. There are hundreds of people around the globe who review the changes to the site to help ensure quality. If you're new, you may want to check out our getting started page. Questions or comments should be sent to one of our many mailing lists. If you like what you see here and want to support our efforts, please consider becoming a member.
For your deployment (server configuration, roles, etc.), there have been a lot of good suggestions, especially from Bob and Jeff. For some time attackers have been using backdoors and trojans that are entirely memory based. We've recently developed a new type of security product which validates server memory (using similar techniques to how Tripwire (see Bob's answer) validates files).
It's called BlockWatch, primarily designed for use in cloud/hypervisor/VM type deployments, but it can also validate physical memory if you can extract it.
For instance, you can use BlockWatch to verify your kernel and process address space code sections are what you expect (the legitimate files you installed to your disk).
Block incoming ports 135, 137, 138, 139 and 445 with a firewall. The built-in one will do. Windows Server 2008 is the first one for which using RDP directly is as secure as SSH.
First off, if you're unaware, Samba or SMB == Windows file sharing, \\computer\share etc.
I have a bunch of different files on a bunch of different computers. It's mostly media and there is quite a bit of it. I'm looking into various ways of consolidating this into something more manageable.
Currently there are a few options I'm looking at, the most insane of which is some kind of samba share indexer that would generate a list of things shared on the various samba servers I tell it about and upload them to a website which could then be searched and browsed.
It's a cheap solution, OK?
Ignoring the fact that the idea is obviously a couple of methods short of a class, do you chaps know of any way to link to Samba file shares in HTML in a cross-browser way? In Windows one does \\computer\share, in Linux one does smb://computer/share, neither of which works AFAIK from browsers that aren't also used as file managers (e.g. any browser that isn't Internet Explorer).
Some Clarifications
The computers used to access this website are a mixture of Windows (XP) and Linux (Ubuntu) with a mixture of browsers (Opera and Firefox).
In Linux, entering smb://computer/share only seems to work in Nautilus (and presumably Konqueror / Dolphin for you KDE 3.5/4 people). It doesn't work in Firefox or Opera (Firefox does nothing, Opera complains the URL is invalid).
I don't have a Windows box handy atm so I'm unsure if \\computer\share works in anything apart from IE (e.g. Firefox / Opera).
If you have a better idea for consolidating a bunch of random Samba shares (it certainly can't get much worse than mine ;-)), it's worth knowing that there is no guarantee that any of the servers I would want to index / consolidate would be up at any particular moment. Moreover, I wouldn't want the knowledge of what they have shared to be lost or hidden just because they weren't available. I would want to know that they share 'foo' but are currently down.
Hmm, protocol handlers look interesting.
As Mark said, in Windows protocol handlers can be dealt with at the OS level.
Protocol handlers can also be done at the browser level (which is preferred, as it is cross platform and doesn't involve installing anything).
Summary of how it works in Firefox
Summary of how it works in Opera
I'd probably just set up Apache on the Samba servers and let it serve the files via HTTP. That'd give you a nice autoindex default page too, and you could just wget and concatenate each index for your master list.
A couple of other thoughts:
file://server/share/file is the de facto Windows way of doing it
You can register protocol handlers in Windows, so you could register smb and redirect it to file://. I'd suspect GNOME/KDE/etc. would offer the same.
To make the links work cross-platform you could look at the user agent, either in a CGI script or in JavaScript, and update your URLs appropriately.
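For instance, a small client-side sketch (the link markup and class name are made up for illustration) that rewrites share links per platform - file:// UNC-style paths for Windows, smb:// for everything else:

// Hypothetical sketch: links are written as
// <a class="share-link" data-server="computer" data-share="share">...</a>
// and rewritten to the platform's preferred scheme at load time.
document.querySelectorAll<HTMLAnchorElement>("a.share-link").forEach((link) => {
  const server = link.dataset.server;
  const share = link.dataset.share;
  const isWindows = navigator.userAgent.indexOf("Windows") !== -1;
  link.href = isWindows
    ? `file://${server}/${share}`
    : `smb://${server}/${share}`;
});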
Alternatively, if you want to consolidate SMB shares you could try using Microsoft DFS (which also works with Samba).
You set up a DFS root and tell it about all the other SMB/Samba shares you have in your environment. Clients then connect to the root and see all the shares as if they were hosted on that single root machine; the root silently redirects clients to the correct system when they open a share.
Think of it like symbolic links or a virtual file system for SMB.
It would solve your browsing problem. I'm not sure if it would solve your searching one.