I've been searching online, and specifically on IBM's site, but have not yet found where WebSphere configures its setting for regular/periodic syncs with its LDAP repository. I've heard that on some regular/periodic default interval (30 mins maybe?), WebSphere will update itself with the data in its LDAP repository. This is to account for updates being made to the LDAP repository outside of WebSphere's control. Anyone know where in WebSphere this interval setting is configured?
By default, WAS syncs with LDAP by fetching user and group information into a cache.
The cache timeout defaults to 10 minutes; when it expires, all cached information not accessed within the timeout period is purged from the cache.
On the next lookup, WAS queries the LDAP server again and stores the result back in the cache.
Global security > Authentication cache settings
It turns out there is no such LDAP synchronization setting in WebSphere, which explains why I couldn't find it.
I had a similar problem. It seems that WAS keeps search results in its cache for some time. The solution is to set low LDAP cache timeouts. Navigate here: Global security > Federated repositories > LDAP1 > Performance
Is there any integration for Let's Encrypt in OpenShift (or is this planned)? Let's Encrypt is going to issue certs that expire in 90 days [1], and a big part of their plan is to have automation set up by the people who use their certs, so that the certs are always renewed in time. Given this, some integration from OpenShift would be necessary.
Thanks,
[1] https://letsencrypt.org/2015/11/09/why-90-days.html
Currently, automating SSL certificate renewal and installation on OpenShift Online is not possible, because the SSL certificates are stored at the node level and SSL connections are terminated by the node-level proxy (reference this). If you would like to see it included in future versions, you should vote here and get people to vote on it. You could probably automate it locally to some extent (or build a module to do it) by using the OpenShift Online API. Another suggestion is to get a free SSL certificate from StartSSL that lasts for a year and install it using either the command line or the web console.
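If you go the command-line route, the rhc client can attach a certificate to a custom domain alias. A minimal sketch, assuming a hypothetical app named myapp, an alias www.example.com, and cert/key files already on disk (custom certificates also require a plan that supports them):

    # App/alias names are examples; requires a plan with custom-cert support
    rhc alias add myapp www.example.com
    rhc alias update-cert myapp www.example.com \
        --certificate mycert.crt --private-key mykey.key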
I am new to GCE. I was able to create a new instance using the gcutil tool and the GCE console. There are a few things that are unclear to me, and I need help:
1) Does GCE provide a persistent disk when a new instance is created? I think it's 10 GB by default, but I'm not sure. What is the right way to stop the instance without losing the data saved on it, and what will be the charge (US zone) if, say, I need 20 GB of disk space for that?
2) If I need SSL to enable HTTPS, are there any extra steps I should take? I think I will need to add a firewall rule (per the gcutil addfirewall command) and create a certificate (or install one from a third party)?
1) Persistent disk is definitely the way to go if you want a root drive on which data retention is independent of the life cycle of any virtual machine. When you create a Compute Engine instance via the Google Cloud Console, the “Boot Source” pull-down menu presents the following options for your boot device:
New persistent disk from image
New persistent disk from snapshot
Existing persistent disk
Scratch disk from image (not recommended)
The default option is the first one ("New persistent disk from image"), which creates a new 10 GB PD, named after your instance name with a 'boot-' prefix. You could also separately create a persistent disk and then select the "Existing persistent disk" option (along with the name of your existing disk) to use an existing PD as a boot device. In that case, your PD needs to have been pre-loaded with an image.
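For comparison, the same flow with the gcutil tool might look like the sketch below; disk/instance names, zone, and image are examples, and flag spellings can vary between gcutil releases:

    # Create a 20 GB persistent disk pre-loaded with an image (example names)
    gcutil adddisk my-boot-disk --size_gb=20 --zone=us-central1-a \
        --source_image=debian-7-wheezy
    # Boot a new instance from that existing disk
    gcutil addinstance my-instance --zone=us-central1-a --disk=my-boot-disk,boot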
Re: your question about cost of a 20 GB PD, here are the PD pricing details.
Read more about Compute Engine persistent disks.
2) You can serve SSL/HTTPS traffic from a GCE instance. As you noted, you'll need to configure a firewall to allow your incoming SSL traffic (typically port 443) and you'll need to configure https service on your web server and install your desired certificate(s).
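A minimal firewall rule with gcutil could look like this sketch (the rule name is arbitrary, and it assumes the default network):

    # Allow incoming HTTPS (TCP 443) from anywhere on the default network
    gcutil addfirewall allow-https --description="Allow HTTPS" --allowed="tcp:443"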
Read more about Compute Engine networking and firewalls.
As an alternative approach, I would suggest deploying VMs using Bitnami. There are many stacks you can choose from, and this will save you time when deploying the VM. I would also suggest going with SSD disks: the pricing is close between magnetic disks and SSDs, but the performance boost is huge.
As for serving the content over SSL, you need to figure out how the requests will be processed. You can use NGINX or Apache servers. In either case you would need to configure virtual hosts for the default ports: 80 for non-encrypted and 443 for SSL traffic.
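As a rough sketch of what those virtual hosts could look like in NGINX (domain, certificate paths, and the HTTP-to-HTTPS redirect are example choices, not requirements):

    # Example NGINX server blocks for ports 80 and 443
    server {
        listen 80;
        server_name example.com;
        return 301 https://$host$request_uri;   # send plain HTTP to HTTPS
    }
    server {
        listen 443 ssl;
        server_name example.com;
        ssl_certificate     /etc/ssl/certs/example.com.crt;
        ssl_certificate_key /etc/ssl/private/example.com.key;
        root /var/www/html;
    }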
The easiest way to serve SSL traffic from your VM is to generate SSL certificates using the Let's Encrypt service.
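At the time of writing, obtaining a certificate boils down to running the Let's Encrypt client on the VM; a sketch, assuming the webroot plugin and a docroot of /var/www/html (domain and paths are examples):

    git clone https://github.com/letsencrypt/letsencrypt
    cd letsencrypt
    ./letsencrypt-auto certonly --webroot -w /var/www/html -d example.com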
I'm using Apache HttpClient (4.2.2) / Java 7 to open many reusable connections to a Tomcat 7 server (to simulate many users repeatedly hitting the service). Both client and server run Ubuntu 12 (on different machines). I made sure that sysctl.conf and limits.conf allow this scenario.
This works well up to about 1500 simulated users / connections, and the connections get reused as expected. Somewhere between 1500 and 1600 simulated users, however, connections are no longer reused and are closed and re-opened all the time. Why might this be the case?
I don't think the problem is on the server side: when I start multiple simulation clients on different machines against the same server, the server has no problem reusing connections, as long as each client stays below 1500 connections.
There can be various reasons why connections are no longer being re-used, depending on the configuration of the connection manager or of the server side. The easiest way to find out the reason is to run HttpClient with context logging turned on, as described in the 'context logging for connection management / request execution' example in the Logging Guide.
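It is also worth ruling out the client-side pool limits: in HttpClient 4.2 the PoolingClientConnectionManager defaults to 20 connections in total and 2 per route, so a 1500-user simulation must have raised them explicitly, and a limit configured somewhere around 1500-1600 would produce exactly this cliff. A minimal sketch of the relevant settings (host/port and the values are examples):

    import org.apache.http.HttpResponse;
    import org.apache.http.client.methods.HttpGet;
    import org.apache.http.impl.client.DefaultHttpClient;
    import org.apache.http.impl.conn.PoolingClientConnectionManager;
    import org.apache.http.util.EntityUtils;

    public class PoolCheck {
        public static void main(String[] args) throws Exception {
            PoolingClientConnectionManager cm = new PoolingClientConnectionManager();
            cm.setMaxTotal(2000);           // default is 20; must exceed the user count
            cm.setDefaultMaxPerRoute(2000); // default is 2; all users hit the same route
            DefaultHttpClient client = new DefaultHttpClient(cm);

            HttpResponse rsp = client.execute(new HttpGet("http://server:8080/service"));
            EntityUtils.consume(rsp.getEntity()); // consume fully so the connection returns to the pool
            System.out.println("Pool stats: " + cm.getTotalStats());
        }
    }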
You might need to increase the number of available workers; at the very least, check whether there are free workers left when you run out of connections by looking at server-status.
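If the backend is Tomcat 7, as in the question, the equivalent knob is maxThreads on the Connector in conf/server.xml; a sketch with illustrative values (the default maxThreads is 200):

    <!-- conf/server.xml: raise the request-processing thread pool -->
    <Connector port="8080" protocol="HTTP/1.1"
               maxThreads="2000" acceptCount="200" connectionTimeout="20000" />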
I have different sites, each running with 4 to 5 servers. Each location has one monitoring server with Nagios. Now I want to create a central location and combine all the Nagios services running at each location. Can anyone point me to documentation for this type of job?
There are two approaches that you can take.
Install a new Nagios core, as you did at each location, and perform active checks on each of the remote hosts. You'll likely end up installing NRPE on each of the remote hosts at each location; you can read this document for the details: http://nagios.sourceforge.net/docs/nrpe/NRPE.pdf (a minimal NRPE config sketch follows these two options). If your remote servers are Windows servers, you can use NSClient to do much of what NRPE does for Linux hosts. This effectively centralizes your monitoring server. I also wrote some how-to style entries for using NRPE to run privileged commands (http://blog.gnucom.cc/?p=479) or to run event handlers (http://blog.gnucom.cc/?p=458). If you get tired of installing NRPE by hand, you can use my script here: http://blog.gnucom.cc/?p=185. I also have instructions for installing NSClient here: http://blog.gnucom.cc/?p=201.
Install a new Nagios core, as you did at each location, and perform passive checks by instructing the remote Nagios cores to feed their results into the new central Nagios core's passive command file. I haven't done this myself, so I'll point you to the community documentation here: http://nagios.sourceforge.net/docs/2_0/passivechecks.html. You could probably look at my event handler post to set up event handlers that send check results to the main server.
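To make the first option concrete, here is a minimal sketch of the two configuration pieces NRPE needs; IPs, paths, and thresholds are examples assuming a stock NRPE install:

    # On each remote host -- /etc/nagios/nrpe.cfg
    allowed_hosts=10.0.0.5                       # the central Nagios server
    command[check_load]=/usr/lib/nagios/plugins/check_load -w 5,4,3 -c 10,8,6

    # On the central Nagios server -- command and service definitions
    define command {
        command_name  check_nrpe
        command_line  $USER1$/check_nrpe -H $HOSTADDRESS$ -c $ARG1$
    }
    define service {
        use                  generic-service
        host_name            remote-host-1
        service_description  Load
        check_command        check_nrpe!check_load
    }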
From my personal experience, the first option is easier to implement and far easier to administer. However, as your server fleet grows you'll start seeing major CPU bottlenecks on the main Nagios core. This is where passive checks become beneficial, as the main Nagios core simply waits for check results to be sent to it rather than having to run the checks itself.
Hope this helps. :)
A centralized view tool may be what you are looking for. There are a number of different options available.
Nagiosfusion
MK Livestatus
Nagcen
Thruk
Is there any solution for running these kinds of operations on DreamHost or other shared hosting environments where I don't have access to tweak Apache?
You certainly can, but as long as the Apache HTTP server doesn't provide non-blocking IO capabilities (each polling connection has a server thread associated with it), you'll run out of memory very fast (after 2-3k connections).
If you meant Apache Tomcat: NIO is turned off by default, and you need access to the configuration files in order to change this.
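For reference, on a Tomcat you do control, enabling NIO is a one-attribute change to the connector in conf/server.xml (the port and timeout below are the stock defaults):

    <!-- conf/server.xml: switch the HTTP connector from the default blocking
         protocol to the non-blocking NIO implementation -->
    <Connector port="8080" protocol="org.apache.coyote.http11.Http11NioProtocol"
               connectionTimeout="20000" redirectPort="8443" />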