"Route not admitted by a router" on Openshift Online v3 - openshift

I have an application deployed on the Openshift Online v3 starter plan which (used to) run well until yesterday. Yesterday I had to publish a new version of my application. Apparently the platform ran into problems redeploying it, and I had to cancel some processes that seemed locked or kept restarting.
Finally I managed to have my pod running with the new version, the logs look fine.
The issue now is that my app is no longer exposed. When hitting the URL that was assigned to me, I get the infamous "not available" OO page:
Application is not available
The application is currently not serving requests at this endpoint. It may not have been started or is still starting.
Possible reasons you are seeing this page:
The host doesn't exist. Make sure the hostname was typed correctly and that a route matching this hostname exists.
The host exists, but doesn't have a matching path. Check if the URL path was typed correctly and that the route was created using the desired path.
Route and path matches, but all pods are down. Make sure that the resources exposed by this route (pods, services, deployment configs, etc) have at least one pod running.
I checked these 3 suggestions and made sure that my host existed, that the path was correct, and that my pods were up. So, not understanding what the real issue was, I dropped the existing route and created a new one.
It's been 2 hours now, and the route UI keeps displaying this message:
The route is not accepting traffic yet because it has not been admitted by a router.
My understanding is that the router which should admit my route is not part of my project; it is managed by Openshift Online. Am I right?
So what could I do now to unlock my new route?
Thanks for your suggestions
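For reference, the route's admission status can also be inspected from the CLI; a minimal sketch, assuming the oc client is logged in to the cluster, and with my-route / my-project standing in for the real names:
oc get route my-route -n my-project -o wide
oc describe route my-route -n my-project
oc describe shows the route's status conditions, including whether any router has admitted it. On the Starter tier the routers themselves are managed by Openshift Online, so if the Admitted condition simply never appears there is nothing to fix inside the project itself.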

There it is! My application is reachable again at last: it took 2 days for the router to set up my route. No action was required on my part.
But the starter platform is still experiencing difficulties, so I will avoid any redeployment until the status turns green again.

Related

Encountered problem while integrating devstack - osm (open source mano)

I'm currently trying to set up a cloud on my PC using VirtualBox. The idea is that I have 2 virtual machines, one with devstack installed (all in one) and the other with OSM MANO. Right now both have everything installed. Hence, I can log in to MANO with user and password 'admin', as well as to devstack.
Current properties:
VM1 (devstack): IP (enp0s8) -> 192.168.56.101
Login to 192.168.56.101 -> correct
VM2 (mano): IP (enp0s8) -> 192.168.56.105
Login to 192.168.56.105 -> correct
As some of you may guess, I have 2 network interfaces in every VM: the first one is NAT (enp0s3 with IP 10.0.2.15) and the second one is Host Only (192.168.56.x according to VirtualBox).
Needless to say, I can ping from one virtual machine to another without any problem.
Now, in the past I've been using devstack (Ubuntu 18.04) to play with it a little bit, learn how to deploy instances, create groups and so on. Indeed, I built a topology with an instance as a router and Nagios as the monitoring tool. It worked and I learnt a lot!
Anyway, what I want in this case is to start from scratch (scratch meaning having downloaded MANO and devstack but without going any further). So here I am, trying to integrate OSM with devstack, using the osm vim-create command as is:
osm vim-create --name openstack-site --user admin --password my_openstack_password --auth_url http://192.168.56.101:5000/v3 --tenant admin --account_type openstack
In this case, my openrc file (downloaded from horizon) resulted in my auth_url being:
export OS_AUTH_URL=http://192.168.56.101:5000/v3
What I can't get my head around is why this doesn't work: whenever I log in to the MANO web interface (after running the osm vim-create command) and go to VIM accounts, the operational state is "error".
Any kind of help would be much appreciated, as I've been struggling with this for a week now.
Thanks in advance!
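As a first check, the VIM account's state and error detail can also be read from the OSM CLI instead of the web UI; a quick sketch, assuming the account was created as openstack-site like above:
osm vim-list
osm vim-show openstack-site
vim-show prints the stored configuration together with the message that put the account into the error state, which usually narrows things down to authentication, networking, or SSL.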
I had the same problem. At the beginning I thought it was a network problem, but I finally found out it was due to an SSL problem. The easiest solution is to pass a specific flag that skips SSL verification until the developers fix it: --config '{insecure: True}'
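For concreteness, appended to the command from the question it would look something like the line below; treat it as a hedged example, and note the https URL is illustrative, since the insecure flag only matters when Keystone is served over HTTPS with a self-signed certificate:
osm vim-create --name openstack-site --user admin --password my_openstack_password --auth_url https://192.168.56.101:5000/v3 --tenant admin --account_type openstack --config '{insecure: True}'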
I also encountered this problem after installing OSM-10 and OpenStack Ussuri on Ubuntu 18.04 a few days ago. I solved it by changing the URL from --auth_url http://192.168.23.18:5000/v3 to --auth_url http://controller:5000/v3 and putting "192.168.23.18 controller" in /etc/hosts inside the RO container. "controller" here is the host name of the machine where OpenStack is installed, the one used in your Keystone authentication URLs. Maybe you have already solved this, but the problem is troublesome enough that I hope it spares other people the same annoyance.
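A rough sketch of that workaround, assuming a Docker-based OSM install where the RO container name can be found with docker ps (the container name and IP below are placeholders):
docker exec -it <ro-container> sh -c 'echo "192.168.23.18 controller" >> /etc/hosts'
osm vim-create --name openstack-site --user admin --password my_openstack_password --auth_url http://controller:5000/v3 --tenant admin --account_type openstack
Keep in mind that edits to a container's /etc/hosts do not survive a container restart, so the entry has to be re-added after the stack is redeployed.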

Deploying a .NET application to Google Cloud Platform worked on initial deploy, now getting 502 Error: Server Error

To give some background, I'm an (unpaid) intern, and this isn't the kind of work I normally deal with. My employers wanted to update some pictures; they did so locally, but didn't know how to upload the new version to the server.
I used the publish settings that were saved in Visual Studio from when the previous intern deployed the server (he specialized in website work), and it worked on deploy ... then I refreshed the page and I'm getting a 502 server error.
Steps I have taken:
Connected to the VM and restarted it - didn't solve it. It's running Windows Server 2016.
Opened the VM through RDP and checked for errors. There were 3 services not running, and I started them manually. One still isn't running, Downloaded Maps Manager. OK... I googled it, it's not a necessary service, so I disabled it. Now there are no errors and all services are running, but I'm still getting this error.
I tried pinging the server's IP and the URL itself, and both work.
I believe it might have something to do with the load balancer, but I've only had one HTML class and nothing dealing with actually publishing things. If you could point me in the right direction I would appreciate it. The only reason I'm trying to fix this myself is that I didn't make any kind of backup, and I feel so stupid having taken the site down.
Edit: I've gone to "load balancing" and it says the service is unhealthy. I tried going to the IPs listed there and it brings me to the same 502 server error page. From what I've gathered this is a configuration error; it's impossible they broke something in the site itself, right? It did work that first time, and if I run it from Visual Studio it works on the local machine ...
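If it helps narrow things down, the backend health that the "load balancing" page reports can also be pulled from the Cloud SDK; a sketch, where BACKEND_SERVICE_NAME is a placeholder for whatever backend the load balancer lists:
gcloud compute backend-services list
gcloud compute backend-services get-health BACKEND_SERVICE_NAME --global
An UNHEALTHY backend often means the health check is probing a port or path the IIS site no longer answers on, or that a firewall rule allowing the health-check probes was changed, rather than anything wrong with the site's code.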

Openshift 3 Online Starter & Routing

I have a starter (free) tier account with Openshift online. I have an application consisting of two pods, a Node and a Mongo. The pods build and deploy; from the terminal that the web console opens on the running Node pod, I can run curl localhost:8080 and the Node process obligingly spits back my base page.
I have a route that was autogenerated; the web console gives me a link to <myappname>.stuff.starter-east-1.openshiftapps.com and appears to correctly reference the Node service that sits on top of the running Node pod.
However, when I point my browser at that hostname, I get the Openshift error page that tells me that either the route or path was not typed correctly, or the pod isn't running.
I have tried this with my own code and with the example node packages and I see the same thing.
When I use the oc tool to query things about my application, I see that I don't have a router resource - but the route claims to have been exposed on a router. So I think I'm using some kind of default router provided by the platform and don't have to launch one in my project, but I'm not sure. Most of the other questions around this topic are from people using the Enterprise product on their own hardware, where they have more control at the admin layer over the router package; all the suggestions seem to imply that for the Online product this 'just works'. Any ideas what I am missing?
Update: after some period of time, the example project did work and a browser request was served with the basic example page. Looking at the two setups I cannot see any differences, or why my route (in my custom app) never gets activated but the sample project's route does.
Turns out the issue was that my application (Node) was listening on localhost:8080, when it needs to be listening on 0.0.0.0:8080. I'm not enough of a networking guy to explain why that matters to the router, but it does.
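Roughly why it matters: the router forwards traffic to the pod's own IP address, and a process bound only to the loopback interface (127.0.0.1) never sees connections arriving on the pod's network interface. A hedged way to check what the process is actually bound to from inside the pod (the pod name is a placeholder, and ss/netstat may not be installed in every image):
oc rsh <node-pod-name>
ss -tlnp || netstat -tlnp
If the listener shows 127.0.0.1:8080 rather than 0.0.0.0:8080 (or :::8080), the fix belongs in the application, i.e. passing 0.0.0.0 as the host argument to the server's listen call.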

Intermittent MySQL Connection Error in Azure Website

I'm getting the following error intermittently when making a call from my ASP.Net MVC web application which is using Dapper to query MySQL.
Unable to connect to any of the specified MySQL hosts.
The exception only occurs when my web app is published to Azure. It has worked 100% of the time when I run the code locally. I've deployed the code to a second azure website, and also get the exception there, again intermittently.
The MySQL database is running on an Azure VM (Ubuntu). This server also has some R scripts that access the database, which are being run at a set interval. I've had no connectivity issue with these either. It is just the .Net code that's struggling.
I've scoured the web, but don't feel like I've turned up anything of value. Most of the links point to a connection string problem, but since it works intermittently that doesn't seem to fit my case. Some links reference DNS issues, but I get the same problem when I use the IP address instead of the machine name, which should take the DNS server out of the picture.
I'm sure I need to track down more information, but I'm not sure where it would be. This is my first foray into using a MySQL db in this fashion, and I'm not familiar with config options or log files on that side of things. I feel similarly about Azure websites with database interactions too.
What can I try next?
Just to drive home the point about this error being intermittent, here's a screenshot from the Runscope job that's hitting the page (thus triggering the MySQL query) every 5 minutes:
I was able to fix (or perhaps "circumvent") this problem by adding the --skip-host-cache flag to our MySQL configuration file. I still don't fully understand what the root of the problem is, but we haven't had any issues with MySQL connectivity from the Azure website since adding it.
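For anyone looking for where that flag lives, a minimal sketch of the change on a stock Ubuntu MySQL install (the exact config path varies between versions, so treat it as an example):
# e.g. in /etc/mysql/mysql.conf.d/mysqld.cnf
[mysqld]
skip-host-cache
sudo systemctl restart mysql
skip-host-cache makes MySQL stop caching the results of client host-name lookups (and the per-host error counters kept alongside them), so every new connection is resolved fresh instead of reusing a possibly stale cache entry.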

Trouble with hg serve on the LAN

Our team started using Mercurial about a month ago and it was a rough start, but it's working out well now. At the end of last week, though, we suddenly had issues pulling from each other's repositories.
Normally, I would pull from, for example, prog12:8000, and it would work great. Now I get the message
URLError: [Errno 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
The hg server is running, and it's not a firewall issue. This issue only occurs when trying to access my repo and two other people's; accessing everyone else's, and the one on our webdev server, is fine. We are all on the same LAN (though two of us connect via VPN). We all have the same issue - from my own computer I can type in my computer name:8000 and it works, but no one else can see it.
I appreciate any suggestions!
Is it possible your IT department deployed something that's acting as a firewall on each machine? Being able to connect to your own port 8000, but not to anyone else's, just screams firewall.
That said, most people don't actually run hg serve on developer boxes. Instead you let each developer freely create repos on the "central" "webdev" box. So I might create 'work-in-progress-ry4an' and push/pull from there, and others can pull from it.
The hg serve functionality is a great way to pass someone some quick changesets, but it isn't built to be used as an always-on server.
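A rough sketch of that central-repo workflow, assuming the webdev box is reachable over SSH and the repository path is purely illustrative:
# once, on the webdev box
hg init /srv/hg/work-in-progress-ry4an
# from a developer machine, inside the local clone
hg push ssh://webdev//srv/hg/work-in-progress-ry4an
# teammates pull from the same URL
hg pull ssh://webdev//srv/hg/work-in-progress-ry4an
The double slash in the ssh URL makes the path absolute on the server; with a single slash it would be resolved relative to the login user's home directory.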