I am facing a problem that I cannot figure out how to solve. I am using OpenShift Origin 3.7 and have three routes corresponding to three deployments, as follows (dummy data):
1) App 1 ---> www.domain.edu/
2) App 2 ---> www.domain.edu/path1
3) App 3 ---> www.domain.edu/path2
Since the domain was the same for all three, I provided the same certificate and private key on each of these routes when configuring the secure routes, and it worked great until recently, when the certificate came due for renewal. I generated a new certificate, deleted the old routes, and created new, identical routes with the new certificate details.
But when I access the routes in my browser, the old certificate is still displayed. Am I skipping some step that needs to be done? When adding the certificate, I only uploaded the certificate and key in the create-route window and nowhere else. Do I have to redeploy the entire service, etc.?
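For what it's worth, one way to rule out browser caching and see exactly which certificate the router is serving is to pull it straight off the socket. A minimal Python sketch (the host name below is the dummy domain from above; substitute the real route host):

```python
import socket
import ssl

def serving_cert_not_after(host, port=443):
    """Connect to host and return the expiry (epoch seconds) of the cert it serves."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # cert["notAfter"] looks like "Jun  1 12:00:00 2025 GMT"
    return ssl.cert_time_to_seconds(cert["notAfter"])

# e.g. serving_cert_not_after("www.domain.edu")
```

If this still reports the old expiry date after the routes were recreated, the router really is serving the old certificate and it is not a browser-cache issue.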
Help on this would be greatly appreciated.
Thanks
Solved this problem. I just provided the certificate in the base app, and things started working.
Solved this issue. I tried changing the Termination Type.
Previously the Termination Type was Edge. I changed it to Re-encrypt, added all the certificates, and then reverted back to Edge. Somehow it worked; I don't know what the main issue was.
Related
Does using Chrome v.63 force use of https?
I am running Apache 2.4.27 on a Windows 10 desktop as a sandbox where I can experiment and do some tutorials. I have a virtual host setup called www.tutorial.dev with an alias of tutorial.dev. In the Windows 10 hosts file I have set up www.tutorial.dev and tutorial.dev to point to localhost.
As of yesterday the URL http://tutorial.dev/Bootstrap4FromScratch/ was working normally, in this case providing a directory listing as a jump-off point into various examples and exercises. Today, when I type in the URL, Chrome changes it to https and I get a connection-refused message.
I understand the connection refused message. There are no certificates setup.
The only change I can find is that Chrome changed from v.62.x to v.63.x. What in Chrome 63 could be forcing http to https?
I don't have this problem with MS Edge. I tested another similar configuration on a different machine that was in the process of downloading Chrome 63.x; it already had 62.x installed. It worked until the 63.x upgrade was complete, then the same problem occurred.
Additional information: If I use http://localhost to bring up the index.html or version.php in the htdocs directory the switch from http to https does not happen. The virtual host www.tutorial.dev resides in another directory outside of htdocs.
If this has been asked and answered please point me to the question/answer thread.
Thanks in advance,
Barry
Google owns the .dev TLD, and with Chrome 63 they are forcing HTTPS on all requests to anything.dev.
I went through my local dev setup and replaced all references to .dev with .local, works fine now.
Your other option is to use Firefox for local development; .dev now triggers HTTPS in Firefox too (since before FF 61), though there is a workaround.
Edit (aside):
I have switched to using .localhost for dev, as browsers allow navigator.geolocation.getCurrentPosition() there (it is blocked if the site is not HTTPS).
The Google Chrome 63 update, released in December 2017, places .dev domains in the preloaded HSTS list with a rule enforcing HTTPS; there are no workarounds.
{ "name": "dev", "include_subdomains": true, "mode": "force-https" }
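The entry above is an ordinary JSON record from Chromium's preload list. A rough sketch of how such a rule is applied (a simplified model for illustration, not Chrome's actual code):

```python
import json

# The .dev entry from Chromium's HSTS preload list, as quoted above
rule = json.loads('{ "name": "dev", "include_subdomains": true, "mode": "force-https" }')

def forced_https(hostname, rule=rule):
    """Does this preload rule upgrade the given hostname to HTTPS?"""
    labels = hostname.lower().rstrip(".").split(".")
    if labels[-1] != rule["name"]:
        return False
    return len(labels) == 1 or rule["include_subdomains"]

print(forced_https("tutorial.dev"))   # True: every *.dev request is upgraded
print(forced_https("tutorial.tst"))   # False: .tst is not in the list
```

This is also why renaming the local vhost to another suffix (.localhost, .tst, ...) makes the problem disappear: the rule keys on the TLD.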
The "only" way is to switch from .dev to something else, like .localhost.
The IETF reserves a few TLDs for testing and documentation:
TLDs for Testing, & Documentation Examples
There is a need for top level domain (TLD) names that can be used
for creating names which, without fear of conflicts with current or
future actual TLD names in the global DNS, can be used for private
testing of existing DNS related code, examples in documentation, DNS
related experimentation, invalid DNS names, or other similar uses.
For example, without guidance, a site might set up some local
additional unused top level domains for testing of its local DNS code
and configuration. Later, these TLDs might come into actual use on
the global Internet. As a result, local attempts to reference the
real data in these zones could be thwarted by the local test
versions. Or test or example code might be written that accesses a
TLD that is in use with the thought that the test code would only be
run in a restricted testbed net or the example never actually run.
Later, the test code could escape from the testbed or the example be
actually coded and run on the Internet. Depending on the nature of
the test or example, it might be best for it to be referencing a TLD
permanently reserved for such purposes.
To safely satisfy these needs, four domain names are reserved as
listed and described below.
.test
.example
.invalid
.localhost
".test" is recommended for use in testing of current or new DNS
related code.
".example" is recommended for use in documentation or as examples.
".invalid" is intended for use in online construction of domain
names that are sure to be invalid and which it is obvious at a
glance are invalid.
The ".localhost" TLD has traditionally been statically defined in
host DNS implementations as having an A record pointing to the
loop back IP address and is reserved for such use. Any other use
would conflict with widely deployed code which assumes this use.
PS: .foo is also in the preloaded HSTS list
Thanks everyone for the advice. I ended up going with .tst for now. I have a feeling I'll be switching over (forced?) to .localhost at some point, but for now .tst is less typing.
I've written an intercepting proxy in Python 3 which uses a man-in-the-middle "attack" technique to be able to inspect and modify pages coming through it on the fly. Part of the process of "installing" or setting up the proxy involves generating a "root" certificate which is to be installed in the browser and every time a new domain is hit via HTTPS through the proxy, the proxy generates a new site certificate on-the-fly (and caches all certificates generated to disk so it doesn't have to re-generate certificates for domains for which certificates have already been generated) signed by the root certificate and uses the site certificate to communicate with the browser. (And, of course, the proxy forges its own HTTPS connection to the remote server. The proxy also checks the validity of the server certificate if you're curious.)
Well, it works great with the browser surf. (And, this might be relevant -- as of a few versions back, at least, surf didn't check/enforce certificate validity. I can't attest to whether that's the case for more recent versions.) But, Firefox gives a SEC_ERROR_REUSED_ISSUER_AND_SERIAL error on the second (and all later) HTTPS request(s) made through the proxy and Chromium (I haven't tested with Chrome proper) gives NET::ERR_CERT_COMMON_NAME_INVALID on every HTTPS request. These obviously present a major problem when trying to browse through my intercepting proxy.
The SSL library I'm using is pyOpenSSL 0.14 if that makes any difference.
Regarding Firefox's SEC_ERROR_REUSED_ISSUER_AND_SERIAL error, I'm pretty sure I'm not reusing serial numbers. (If anybody wants to check my work, that would be pretty rad: cert.py - note the "crt.set_serial_number(getrandbits(20 * 8))" on line 168.) The root certificate issuer of course doesn't change, but that wouldn't be expected to change, right? I'm not sure what exactly is meant by "issuer" in the error message if not the root certificate issuer.
Also, Firefox's "view certificate" dialog displays completely different serial numbers for different certificates generated by the proxy. (As an example, I've got one generated for www.google.com with a serial number of 00:BF:7D:34:35:15:83:3A:6E:9B:59:49:A8:CC:88:01:BA:BE:23:A7:AD and another generated for www.reddit.com with a serial number of 78:51:04:48:4B:BC:E3:96:47:AC:DA:D4:50:EF:2B:21:88:99:AC:8C .) So, I'm not really sure what Firefox is complaining about exactly.
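As an aside, those dialog values are just the colon-separated hex encoding of the random integer, and the leading 00 on the first one appears because DER serial numbers are signed, so a value whose top bit is set gets a zero byte prepended. A quick check that the two displayed serials really are distinct values that fit in the 20 random bytes set_serial_number was given:

```python
def serial_to_int(display):
    """Convert a browser's colon-separated hex serial back to an integer."""
    return int(display.replace(":", ""), 16)

a = serial_to_int("00:BF:7D:34:35:15:83:3A:6E:9B:59:49:A8:CC:88:01:BA:BE:23:A7:AD")
b = serial_to_int("78:51:04:48:4B:BC:E3:96:47:AC:DA:D4:50:EF:2B:21:88:99:AC:8C")

print(a != b)                 # True: the serials differ
print(a.bit_length() <= 160)  # True: both fit in 20 random bytes
```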
My proxy reuses the private key (and thus public key/modulus) for all certificates it creates on the fly. I came to suspect this was what Firefox was balking about and tried changing the code to generate a new key pair for every certificate the proxy creates on the fly. That didn't solve the problem in Firefox. I still get the same error message. I have yet to test whether it solves the Chromium issue.
Regarding Chromium's NET::ERR_CERT_COMMON_NAME_INVALID error, the common name for site certificate is just supposed to be the domain, right? I shouldn't be including a port number or anything, right? (Again, if anybody would like to check my work, see cert.py .) If it helps any, my intercepting proxy isn't using any wildcards in the certificate common names or anything. Every certificate generated is for one specific fqdn.
I'm quite certain making this work without making Firefox or Chrome (or Chromium or IE etc.) balk is possible. A company I used to work for purchased and set up a man-in-the-middling proxy through which all traffic from within the corporate network to the internet had to pass. The PC administrators at said company installed a self-signed certificate as a certificate authority in every browser on every company-owned computer used by the employees, and the result never produced any errors like the ones Firefox and Chromium have been giving me for the certificates my own intercepting proxy software produces. It's possible the PC administrators tweaked some about:config settings in Firefox to make this all work or something, but I kind of doubt it.
To be fair, the proxy used at this company was either network or transport layer, not application layer like mine. But I'd expect the same can be accomplished in an application-layer HTTP(s) proxy.
Edit: I've tried setting the subjectAltName as suggested by brain99. Following is the line I added in the location brain99 suggested:
r.add_extensions([crypto.X509Extension(b"subjectAltName", False, b"DNS:" + cn.encode("UTF-8"))])
I'm still getting SEC_ERROR_REUSED_ISSUER_AND_SERIAL from Firefox (on the second and subsequent HTTPS requests), and I'm getting ERR_SSL_SERVER_CERT_BAD_FORMAT from Chromium.
Here are a couple of certificates generated by the proxy:
google.com: https://pastebin.com/YNr4zfZu
stackoverflow.com: https://pastebin.com/veT8sXZ4
I noticed you only set the CN in your X509Req. Both Chrome and Firefox require the subjectAltName extension to be present; see for example this Chrome help page or this Mozilla wiki page discussing CA required or recommended practices. To quote from the Mozilla wiki:
Some CAs mistakenly believe that one primary DNS name should go into
the Subject Common Name and all the others into the SAN.
According to the CA/Browser Forum Baseline Requirements:
BR #9.2.1 (section 7.1.4.2.1 in BR version 1.3), Subject Alternative
Name Extension
Required/Optional: Required
Contents: This extension MUST contain at least one entry. Each entry MUST be either a dNSName containing the Fully-Qualified Domain Name or an iPAddress containing the IP address of a server.
You should be able to do this easily with pyOpenSSL:
if not os.path.exists(path):
    r = crypto.X509Req()
    r.get_subject().CN = cn
    r.add_extensions([crypto.X509Extension(b"subjectAltName", False, b"DNS:" + cn.encode("ascii"))])
    r.set_pubkey(key)
    r.sign(key, "sha1")
If this does not solve the issue, or if it only partially solves it, please post one or two example certificates that exhibit the problem.
Aside from this, I also noticed you sign using SHA1. Note that certificates signed with SHA1 have been deprecated in several major browsers, so I would suggest switching to SHA-256.
r.sign(key, "sha256")
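If the certificate ever needs to cover more than one name, the same extension accepts a comma-separated list of entries. A small helper (hypothetical, not part of the poster's cert.py) that builds the byte string pyOpenSSL expects:

```python
def san_value(names):
    """Build a subjectAltName value like b"DNS:example.com, DNS:www.example.com"."""
    return b", ".join(b"DNS:" + name.encode("ascii") for name in names)

print(san_value(["example.com", "www.example.com"]))
# b'DNS:example.com, DNS:www.example.com'
```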
I have followed some tutorials on adding a self-signed cert to my local environment, mainly this one: Getting Chrome to accept self-signed localhost certificate. I've added my certificate to the keychain and it is now marked as trusted for all users, but I'm still getting the error below. I've never tried to do this before and am unsure of the problem and how to fix it. Any advice would be great.
Update
Steffen's answer helped me fix this, so I have awarded the points to him. I used this link for reference: https://serversforhackers.com/video/self-signed-ssl-certificates-for-development
While in theory a certificate for *.dev will match test.dev, in practice most browsers do not allow wildcards on the second level, i.e. *.com or *.org. It does not help that .dev is currently not a public top-level domain.
I recommend that you instead issue yourself the certificate without the wildcard (test.dev) or use an additional level for your hostnames.
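The matching rule the browsers apply can be sketched roughly as follows (a simplified model for illustration, not the full RFC 6125 logic):

```python
def browser_accepts(pattern, hostname):
    """Rough model: a wildcard covers exactly one label and is rejected at the second level."""
    p, h = pattern.lower().split("."), hostname.lower().split(".")
    if p[0] != "*":
        return p == h
    # "*.dev" would be a second-level wildcard (like *.com, *.org): rejected
    if len(p) < 3:
        return False
    return len(p) == len(h) and p[1:] == h[1:]

print(browser_accepts("*.dev", "test.dev"))          # False: second-level wildcard
print(browser_accepts("*.test.dev", "www.test.dev")) # True
```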
Win 8.1 / VS for Web 2012
Hello,
I've recently published a VERY simple WCF service (just one method) to localhost (IIS - Default Web Site). When I publish, I'm returned the following:
=== Publish: 1 succeeded, 0 failed, 0 skipped ===
So one would think I could easily add a reference from my ASP.NET project, by right-clicking and adding a service reference. NOPE! When I get the dialog box, it can't find anything in localhost.
So I poked around in Services via Control Panel, thinking I might have to start the service. It's not even listed in there.
It's been quite a while since I've worked with WCF's and ASP, can someone help me out and tell me what I'm doing wrong?
Thanks,
Jason
1) First, identify exactly which port your service is published on. Go to the IIS Manager console and look around; open the Properties window to find what you need (URL, port).
2) Verify the service works by pasting the URL into any browser. If all is OK, you'll get an information page about the service, including how to create a proxy, etc.
3) Verify the service operations work. This can be done in at least two ways: using Fiddler's Composer, or something better suited to the job like WCF Test Client.
4) Open VS, navigate to the solution's Service References folder, and add a service reference, passing in the link from step 1. (If I'm not mistaken, the 'Discover' button has no effect here, as it looks for possible endpoints inside the project, NOT in IIS.)
That's all; hope that helps.
I'm trying to make Connections 4.5 work with Content Manager. I guess I'm quite far along now, but there are many things I need to fix.
Sometimes my widgets just don't load; it says it cannot load widgets-config.xml.
When I restart the deployment and the app server, everything looks good again.
My biggest problem is adding a library to a community. I want to see how workflow works, and I'd like to create a linked library for this. This is what I get when I try to add the Library widget to the community (the Linked Library widget works fine):
CLFWZ0004E: Event 'widget.added' sent to remote lifecycle handler at https://conserv.egroup.local/dm/atom/communities/feed returned bad response: 403 - Forbidden
I guess there is some problem with HTTPS access. Has anybody here faced this problem? Any hints?
UPDATE-1
After accessing that page, it gives me this:
<td:error>
<td:errorCode>UnsupportedOperation</td:errorCode>
<td:errorMessage>CQL5602: The attempted operation, GET, is not allowed on the resource /communities/feed.
Contact your administrator and provide the incident ID: 1381320497551.
The administrator should forward this information to the application owner.
</td:errorMessage>
</td:error>
So I guess there may be some problem with the proxy policies. I tried making some changes, changing the default policy URL to *, but still no progress.
Hints?