Is it a good idea to show/hide a React component using window.env?
For example, we have a feature which we are not ready to release yet, so we are thinking of hiding it using window.env.FEATURE_ENABLED=0 (these vars will be picked up by an API call to the service that serves the bundle to the browser).
But I am thinking it's risky, since a user can look at window.env, set window.env.FEATURE_ENABLED=1, and start seeing the workflow we intend to hide.
Could anyone please provide their take on this?
Yes, it could potentially be risky for the reason you say.
A better approach would be to only include finished features in the production build - unfinished features that are still in testing should not be sent to the client. For such features, have a separate build. Host it:
On a local dev server (usually one running on the developer's personal machine), which is great when one is making rapid changes, or
On a staging server - one that's accessible to all developers and works like the live site, but isn't served at the production URL.
A staging server is the professional approach when multiple devs need access to it at once. It can take some work at first to integrate it into your build process, but it's worth it for larger projects.
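If a flag-gated feature does have to ship, one mitigation is to enforce the flag on the server rather than only in window.env, so that flipping the variable in the browser exposes a dead UI at worst. A minimal sketch, assuming a Spring-style backend purely for illustration (the property name and endpoint are hypothetical):

import org.springframework.beans.factory.annotation.Value;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class NewWorkflowController {

    // Read from server-side configuration only; this value is never shipped
    // to the browser, so a user cannot flip it in DevTools.
    @Value("${features.new-workflow.enabled:false}")
    private boolean newWorkflowEnabled;

    @GetMapping("/api/new-workflow")
    public ResponseEntity<String> newWorkflow() {
        if (!newWorkflowEnabled) {
            // Behave as if the endpoint doesn't exist while the feature is dark.
            return ResponseEntity.notFound().build();
        }
        return ResponseEntity.ok("new workflow payload");
    }
}

The client may still read a flag for cosmetic hiding, but the decision that matters happens where the user can't edit it.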
I read "Those who would like to enjoy the binding, presentation model structuring, testing capabilities, toolkit independence, and all the other benefits of OpenDolphin, but prefer REST (or other) remoting for data access, can use OpenDolphin with the in-memory configuration"
But I could not find any further hints in the docs?
I can't rely on sticky sessions in my load-balanced web server.
Therefore I need to plug in something different for the HTTP session state.
Is there an OpenDolphin config property prepared for this? If not, are there any plugin points available?
Since OpenDolphin and Dolphin Platform use the remote presentation model pattern to synchronize presentation models between client and server, you need state on the server. Currently this state is held in the session. As you said, it's no problem to use load balancing with sticky sessions to provide several server instances. If you need dynamic updates between the clients, a distributed event bus like Hazelcast will help.
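A minimal sketch of such an event bus with Hazelcast (3.x API; the topic name and the merge logic are hypothetical):

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.ITopic;

public class PresentationModelBus {

    private final ITopic<String> topic;

    public PresentationModelBus() {
        // Each server instance joins the same cluster and shares this topic.
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        topic = hz.getTopic("presentation-model-updates");
        // Apply changes published by other nodes to this node's session state.
        topic.addMessageListener(message -> applyRemoteChange(message.getMessageObject()));
    }

    public void publishChange(String serializedChange) {
        topic.publish(serializedChange);
    }

    private void applyRemoteChange(String change) {
        // Hypothetical: merge the change into the local presentation models.
    }
}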
Therefore I need to plug in something different for the HTTP session state.
What do you need? With the latest version (0.8.6) of Dolphin Platform you can access the HTTP client in the client API and provide custom headers or cookies. Will this help? Can you please tell us what you need, or open an issue at the Dolphin Platform GitHub repo?
I'm playing around with .NET Core and attempting to make use of the user secret store; some details are here: https://docs.asp.net/en/latest/security/app-secrets.html
I'm getting along with it well enough when working locally, but I'm having trouble understanding how this could be utilized effectively in a team environment, and if I wanted to work on this project from more than one computer.
The store itself (at least by default) keeps its configuration JSON file within users/appdata (on Windows). This feature is good to use if you're uploading the project to GitHub, to hide your API keys, connection strings, etc. This is all great when it's just me, on one machine, working on a project. But how does this work in a team environment, or on multiple machines? The only thing I can think of is to find the configuration file, check it into a private repo, and make sure to replace it in the correct directory when changes occur.
Is there another way to manage this that I'm not aware of?
As you already know, the Secret Manager tool provides another way to avoid checking sensitive data into source control by adding this layer of control.
So, where should we store sensitive configuration instead? The location should obviously be separate from your source code and, more importantly, secure. It could be in a separate private repository, protected fileshare, document management system, etc.
Rather than finding and sharing the exact configuration file, however, I would suggest keeping a script (e.g. .bat file) that you would run on each machine to set your secrets. For example:
dotnet user-secrets set MySecret1 ValueOfMySecret1 --project c:\work\WebApp1
dotnet user-secrets set MySecret2 ValueOfMySecret2 --project c:\work\WebApp1
This would be more portable between machines and avoid the hassle of knowing where to find and copy the config files themselves.
Also, for these settings, consider whether you need them to be the same across all developers in your team. For local development, I would normally want to have control to install, use, and name resources differently than others in my team. Of course, this depends on your situation and preferences, and I see reasons to share them too.
I am building a Spring web application hosted on Elastic Beanstalk. I use S3 to store user-uploaded images, which works great. What I don't understand is how fetching images from S3 to the client works. I found three alternatives.
1. Get the image in a controller and send it to the client. Like this:
S3Object object = amazonS3Client.getObject("bucketname", "path/to/image");
// then stream object.getObjectContent() to the HTTP response
2. Make all images public and reference them directly by a URL in the client. Something like this:
<img src="https://s3.amazonaws.com/bucketname/path/to/image.jpg">
3. Use signed download URLs that only work for a certain time. Like this:
GeneratePresignedUrlRequest request = new GeneratePresignedUrlRequest("bucketname", "path/to/image");
request.setExpiration(new Date(System.currentTimeMillis() + 3600 * 1000)); // link stops working after an hour
String url = amazonS3Client.generatePresignedUrl(request).toString();
I'm not sure which approach to go for. Routing images through the web server seems unnecessary, since it loads the server. Opening the URLs up to everyone might increase requests and costs, since anyone can use the images. And the third way is new to me; I haven't really seen anyone practising it, which makes me unsure whether it is really the way to go.
So, how is this usually done?
And how is this handled in the development environment versus the production environment? I guess it doesn't change? Or is it common to use Spring profiles to change the location of static content while developing, and only use S3 in production?
If you're hosting JavaScript and CSS on S3, is it then most common to go for approach 2 and open them up to everyone?
For me it depends upon the requirements you have for access control for images uploaded by a user.
If the images are non-sensitive i.e. it wouldn't really matter if someone else got hold of another user's images, then I would go for approach 2.
If on the other hand it would be a disaster if someone managed to get hold of another user's images, then I would go for approach 3 (or some other form of expiring token access to the images).
The last time I did this I went for approach 2, because the images were non-sensitive. To try to prevent people from discovering images, we did apply a hashing function to the name of the image, but again I wasn't massively concerned about this. In either case, a well-defined bucket structure that the application can easily work out when constructing the URL for an image is useful. So for you, perhaps consider something like:
s3:bucket_name/images/users/<hashed_and_salted_user_name>/<user_images>
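A sketch of how the application might derive that folder name, assuming the JDK's MessageDigest (the helper name and salt handling are hypothetical):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public final class ImageKeys {

    // Hypothetical helper: derive a hard-to-guess folder name from the user name plus a salt.
    public static String userImageKey(String userName, String salt, String fileName)
            throws NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] hash = md.digest((salt + userName).getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : hash) {
            hex.append(String.format("%02x", b)); // hex-encode each byte
        }
        return "images/users/" + hex + "/" + fileName;
    }
}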
As for your question regarding dev vs prod environments, matching the bucket path to the Spring profile is the approach we used. So for example:
s3:bucket_name/prod/images/users/user/foo.jpg
s3:bucket_name/dev/images/users/user/foo.jpg
As you can probably guess, we had Spring profiles named "prod" and "dev". The code for building image URLs took the name of the current Spring profile into account when creating the URL. This gives a nice separation between environments.
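That URL-building code might look roughly like this - a sketch that assumes the profile is exposed through the standard spring.profiles.active property (the class name and the "dev" fallback are hypothetical):

import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Component
public class ImageUrlBuilder {

    // Resolves to "prod" or "dev" depending on the active Spring profile.
    @Value("${spring.profiles.active:dev}")
    private String profile;

    public String urlFor(String userFolder, String fileName) {
        return "https://s3.amazonaws.com/bucket_name/" + profile
                + "/images/users/" + userFolder + "/" + fileName;
    }
}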
In terms of CSS and JavaScript, I tend to host obfuscated/minified versions in the production S3 buckets and full versions in the dev/test buckets (mainly for performance rather than to hide code). In addition, I'd use some sort of versioning/naming structure for how you host CSS/JavaScript in S3, so that you can determine which "version" of resources your app is using. So for example:
s3:bucket_name/css/app-1.css
s3:bucket_name/css/app-2.css
The version of the CSS/Javascript resources is updated each time you push a new version into production.
By going down this path you kinda look at S3 as the final resting place for a piece of Javascript/CSS when it is ready to go into the wide world of production. Once there, you know it will never change. If CSS/Javascript does change, then the user has to fetch a new resource from S3 as the version will be incremented. You can hook this into your build process so that your main app is always referencing the latest version of CSS/Javascript. I found this has two useful functions:
Makes it very easy to determine which version of a resource your application is running with
Makes it very easy to cache resources (either with browser or something like CloudFront) as you know they will never change
Hope that helps.
I really know nothing about securing or configuring a "live" internet-facing web server, and that's exactly what I have been assigned to do by management. Aside from the operating system being installed (and Windows Update), I haven't done a thing. I have read some guides from Microsoft and on the web, but none of them seem to be very comprehensive or up to date. Google has failed me.
We will be deploying a MVC ASP.NET site.
What is your personal checklist when you are getting ready to deploy an application on a new Windows server?
This is all we do:
Make sure Windows Firewall is enabled. It has an "off by default" policy, so the out-of-the-box rule setup is fairly safe. But it never hurts to turn additional rules off, if you know you're never going to need them. We disable almost everything except for HTTP on the public internet interface, but we like ping (who doesn't love ping?) so we enable it manually, like so:
netsh firewall set icmpsetting 8
Disable the Administrator account. Once you're set up and going, give your own named account admin rights. Disabling the default Administrator account helps reduce the chance (however slight) of someone hacking it. (The other common default account, Guest, is already disabled by default.)
Avoid running services under accounts with administrator rights. Most reputable software is pretty good about this nowadays, but it never hurts to check. For example, in our original server setup the Cruise Control service had admin rights. When we rebuilt on the new servers, we used a regular account. It's a bit more work (you have to grant just the rights necessary to do the work, instead of everything at once) but much more secure.
I had to lock down one a few years ago...
As a sysadmin, get involved with the devs early in the project: testing, deployment, operation and maintenance of web apps are all part of the SDLC.
These guidelines apply in general to any DMZ host, whatever the OS, Linux or Windows.
There are a few books dedicated to IIS 7 admin and hardening, but it boils down to:
Decide on your firewall architecture and configuration, and review it for appropriateness. Remember to defend your server against internal scanning from infected hosts.
Depending on the level of risk, consider a transparent application-layer gateway to clean the traffic and make the web server easier to monitor.
Treat the system as a bastion host: lock down the OS and reduce the attack surface (services, ports, installed apps - i.e. no interactive users or mixed workloads; configure the firewall and RPC to respond only to specified management, DMZ or internal hosts).
Consider SSH, OOB and/or management LAN access, and host IDS/file-integrity verifiers like AIDE, Tripwire or Osiris.
If the web server is sensitive, consider using Argus to monitor and record traffic patterns in addition to the IIS/firewall logs.
Baseline the system configuration, then regularly audit against that baseline, minimizing or controlling changes to keep it accurate. Automate it; PowerShell is your friend here.
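For instance, a minimal PowerShell sketch of that baseline-and-audit idea for Windows services (the baseline path is hypothetical; the same pattern extends to ports, installed software, and so on):

# Snapshot the hardened configuration once...
Get-Service | Select-Object Name, Status | Export-Clixml C:\baselines\services.xml
# ...then on each audit run, diff the live state against that baseline.
Compare-Object (Import-Clixml C:\baselines\services.xml) (Get-Service | Select-Object Name, Status) -Property Name, Status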
The US NIST maintains a national checklist program repository. NIST, NSA and CIS have OS and web server checklists worth investigating, even though they are for earlier versions. Look at the Apache checklists as well for configuration suggestions, and review the Addison-Wesley and O'Reilly Apache security books to get a grasp of the issues.
http://checklists.nist.gov/ncp.cfm
http://www.nsa.gov/ia/guidance/security_configuration_guides/web_server_and_browser_guides.shtml
www.cisecurity.org offers checklists and benchmarking tools for subscribers. Aim for a 7 or 8 at a minimum.
Learn from others' mistakes (and share your own if you make them):
Inventory your public-facing application products and monitor them in NIST's NVD (vulnerability database; it aggregates CERT and OVAL as well).
Subscribe to and read microsoft.public.inetserver.iis.security and Microsoft security alerts. (NIST's NVD already watches CERT.)
Michael Howard is MS's code security guru; read his blog (and make sure your devs read it too). It's at: http://blogs.msdn.com/michael_howard/default.aspx
http://blogs.iis.net/ is the IIS team's blog. As a side note, if you're a Windows guy, always read the team blog for the MS product groups you work with.
David Litchfield has written several books on DB and web app hardening. He is a man to listen to; read his blog.
If your devs need a gentle introduction to (or reminder about) web security - and sysadmins too! - I recommend "Innocent Code" by Sverre Huseby. I haven't enjoyed a security book like that since The Cuckoo's Egg. It lays down useful rules and principles and explains things from the ground up. It's a great, accessible read.
Have you baselined and audited again yet? (You make a change, you make a new baseline.)
Remember, IIS is a meta-service (FTP, SMTP and other services run under it). Make your life easier and run one service at a time per box. Back up your IIS metabase.
If you install app servers like Tomcat or JBoss on the same box, ensure that they are secured and locked down too.
Secure the web management consoles to these applications, IIS included.
If you have to have a DB on the box too, this post can be leveraged in a similar way.
Logging: an unwatched public-facing server (be it HTTP, IMAP or SMTP) is a professional failure. Check your logs, pump them into an RDBMS, and look for the quick, the slow and the pesky. Almost invariably your threats will be automated and boneheaded. Stop them at the firewall level where you can.
With permission, scan and fingerprint your box using p0f and Nikto. Test the app with Selenium.
Ensure web server errors are handled discreetly and in a controlled manner by IIS and any applications; set up error documents for 3xx, 4xx and 5xx response codes.
Now you've done all that, you've covered your butt and you can look at application/website vulnerabilities.
Be gentle with the developers; most only worry about this after a breach, when the reputation/trust damage is done and the horse has bolted and is long gone. Address this now - it's cheaper. Talk to your devs about threat trees.
Consider your response to DoS and DDoS attacks.
On the plus side, consider good traffic/slashdotting and capacity issues.
Liaise with the devs and marketing to handle capacity issues and server/bandwidth provisioning in response to campaigns, sales and new services. Ask them what sort of campaign response they're expecting.
Plan ahead with sufficient lead time to allow provisioning. Make friends with your network guys to discuss bandwidth provisioning at short notice.
Unavailability due to misconfiguration, poor performance or under-provisioning is also an issue. Monitor the system for performance: disk, RAM, HTTP and DB requests. Know the metrics of normal and expected performance (please God, is there an apachetop for IIS? ;) ) and plan for appropriate capacity.
During all this you may ask yourself: "Am I too paranoid?" Wrong question - it's "Am I paranoid enough?" Remember and accept that you will always be behind the security curve, and that this list, though it might seem exhaustive, is but a beginning. All of the above is prudent and diligent and should in no way be considered excessive.
Web servers getting hacked are a bit like wildfires (or bushfires here): you can prepare, and it'll take care of almost everything except the blue-moon event. Plan for how you'll monitor and respond to defacement etc.
Avoid being a security curmudgeon or a security Dalek/Chicken Little. Work quietly and work with your stakeholders and project colleagues - security is a process, not an event, and keeping them in the loop and gently educating people is the best way to get incremental payoffs in terms of security improvements and acceptance of what you need to do. Avoid being condescending, but remember: if you DO have to draw a line in the sand, pick your battles - you only get to do it a few times.
profit!
Your biggest problem will likely be application security. Don't believe the developer when he tells you the app pool identity needs to be a member of the local Administrators group. This is a subtle twist on the 'don't run services as admin' tip above.
Two other notable items:
1) Make sure you have a way to back up this system (and periodically test said backups).
2) Make sure you have a way to patch this system and, ideally, test those patches before rolling them into production. Try not to depend upon your own good memory. I'd rather have you set the box to use Windows Update than have it disabled, though.
Good luck. The firewall tip is invaluable; leave it enabled and only allow tcp/80 and tcp/3389 inbound.
Use the roles accordingly - the fewer privileges you use for your service accounts, the better. Try not to run everything as an administrator.
If you are trying to secure a web application, you should keep current with information from OWASP. Here's a blurb:
The Open Web Application Security Project (OWASP) is a 501c3 not-for-profit worldwide charitable organization focused on improving the security of application software. Our mission is to make application security visible, so that people and organizations can make informed decisions about true application security risks. Everyone is free to participate in OWASP and all of our materials are available under a free and open software license. You'll find everything about OWASP here on our wiki and current information on our OWASP Blog. Please feel free to make changes and improve our site. There are hundreds of people around the globe who review the changes to the site to help ensure quality. If you're new, you may want to check out our getting started page. Questions or comments should be sent to one of our many mailing lists. If you like what you see here and want to support our efforts, please consider becoming a member.
For your deployment (server configuration, roles, etc.), there have been a lot of good suggestions, especially from Bob and Jeff. For some time attackers have been using backdoors and trojans that are entirely memory-based. We've recently developed a new type of security product which validates server memory (using techniques similar to how Tripwire (see Bob's answer) validates files).
It's called BlockWatch, primarily designed for use in cloud/hypervisor/VM type deployments, but it can also validate physical memory if you can extract it.
For instance, you can use BlockWatch to verify that your kernel and process address space code sections are what you expect (the legitimate files you installed to your disk).
Block incoming ports 135, 137, 138, 139 and 445 with a firewall; the built-in one will do. Windows Server 2008 is the first version for which using RDP directly is as secure as SSH.
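With the built-in firewall, that could look something like the following (the rule names are arbitrary; 137 and 138 are primarily UDP, hence the second rule):

netsh advfirewall firewall add rule name="Block NetBIOS/SMB TCP" dir=in action=block protocol=TCP localport=135,137-139,445
netsh advfirewall firewall add rule name="Block NetBIOS/SMB UDP" dir=in action=block protocol=UDP localport=135,137-139,445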