What is the difference between Ingress and a reverse proxy?

I have used a few reverse proxies, such as HAProxy, Traefik, and Kong. When I started working with Kubernetes, I was confused by the concept of Ingress. Isn't routing to back-end resources also achievable with a reverse proxy? What is the purpose of using Ingress?

You can use any reverse proxy you want (for example, kube-nginx-proxy), but then you have to perform the configuration steps yourself, which can take a lot of time.
Ingress is designed for fast setup: the only thing you have to do is describe your desired routing in YAML, which is easier and faster than wiring up a third-party solution by hand.
You can also use something like Istio for this purpose, in which case ingress functionality is already integrated.
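For illustration, here is a minimal sketch of what that YAML looks like (the host and Service name are hypothetical):

# A minimal Ingress: route HTTP traffic for one host/path to a
# back-end Service, much as you would configure a reverse proxy
# upstream by hand.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app   # hypothetical back-end Service
                port:
                  number: 80

An ingress controller (NGINX, Traefik, HAProxy, etc.) watches these resources and does the actual reverse proxying for you.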

Why is HashRouter not recommended in react-router v6?

Every react-router v6 documentation page that mentions HashRouter carries a short warning stating that this kind of routing is not recommended. There is no explanation why.
Are there any major disadvantages? Does it break any API somehow?
Short answer: some devs think hash routing produces "ugly" URLs, but really, hash routing serves a purpose where the server environment isn't set up to handle client-side routes, or otherwise needs to serve all page requests from a single static URL.
This is about as much explanation as the docs provide.
HashRouter
<HashRouter> is for use in web browsers when the URL should not (or cannot) be sent to the server for some reason. This may happen in some shared hosting scenarios where you do not have full control over the server. In these situations, <HashRouter> makes it possible to store the current location in the hash portion of the current URL, so it is never sent to the server.
It's basically, "Only use hash routing if that's what you need and you know what you are doing." I think it's generally the case that if you don't know what you are doing, or whether you need it, then you really just want the BrowserRouter.
Are there any major disadvantages? Does it break any API somehow?
I wouldn't say there are major disadvantages to using the HashRouter; it just serves a different purpose, like the NativeRouter on native mobile devices, or the MemoryRouter in node environments. I don't know if you are asking whether it breaks any specific APIs, but I'm inclined to say no: it still works with redux, fetch/axios, and just about anything else I can think of that I've used along with hash routing.
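For reference, a minimal sketch of hash routing in react-router v6 (the components here are made up):

// Routes render from the URL fragment (e.g. example.com/#/about),
// which the browser never sends to the server.
import { HashRouter, Routes, Route } from "react-router-dom";

function Home() { return <h1>Home</h1>; }
function About() { return <h1>About</h1>; }

export default function App() {
  return (
    <HashRouter>
      <Routes>
        <Route path="/" element={<Home />} />
        <Route path="/about" element={<About />} />
      </Routes>
    </HashRouter>
  );
}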
Short answer: if your site is static, use whatever you like; it does not matter. But if you have a backend, hash routing is the recommended approach, and not just for React.
Explanation: when the hash is used, only your frontend application sees the route change; no calls are made to the backend. This matters in production environments, where you have some backend and/or a reverse proxy (like NGINX), API gateway, etc. Without the hash, each request has to be handled by them first, and only if no endpoint matches is it sent on to the frontend. This creates unnecessary calls, which leads to performance issues, unhandled paths, and so on. And in modern cloud environments, it means more money.
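For contrast, making non-hash URLs work with BrowserRouter behind a reverse proxy usually means adding a catch-all fallback, e.g. in NGINX (a sketch; the paths are hypothetical):

# Serve the SPA bundle for any path that matches no real file,
# so BrowserRouter URLs like /users/42 don't 404 at the proxy.
location / {
    root /var/www/app;
    try_files $uri $uri/ /index.html;
}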

Can I back multiple ingress classes with one ingress controller?

Hoping someone here can help.
I have a use case where I am exposing an NGINX ingress controller with both internal and external LBs. The internal LB is used by things that sit outside of K8s, but within the same network, to talk via ingress to things inside K8s, whilst still leveraging our NGINX configurations.
The challenge:
I want to restrict the endpoints we are exposing only to internal systems, so that they cannot be accessed via the external LB (otherwise it would be pretty easy for someone to hit the external LB with the correct host headers and still reach the applications behind it).
Does anyone know of a way to do this that does not involve standing up an entirely duplicate NGINX deployment? I.e., I was hoping to be able to define an ingress class that would use exclusively the service with the internal LB rather than the external one.
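For context, the setup described above usually amounts to two LoadBalancer Services fronting the same controller pods, something like this sketch (the internal-LB annotation varies by cloud provider; the AWS variant is shown, and all names are hypothetical):

# External LB Service
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-external
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - port: 80
      targetPort: 80
---
# Internal LB Service pointing at the same controller pods
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-internal
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - port: 80
      targetPort: 80

Because both Services front the same controller, any host the controller routes is reachable from either LB, which is exactly the problem described above.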

What is the difference between json-rpc and json-api?

Could anybody explain the advantages of using json-rpc over json-api and vice versa? Both formats are JSON-based, but where should I use one, and where the other?
Note: I may come across as a little biased; I am the author of the Json-RPC.net server library.
Json-RPC is a remote procedure call specification, and there are multiple libraries you can use to communicate using that protocol. It is not REST based, and it is transport agnostic: you can run it over HTTP, as is very common, but also over a socket or any other transport you find appropriate, so it is quite flexible in that regard. You can also do server-to-client requests along with client-to-server requests by hosting the RPC server on either the client or the server.
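For example, a JSON-RPC 2.0 call and its response look like this (the subtract example is taken from the spec itself):

--> {"jsonrpc": "2.0", "method": "subtract", "params": [42, 23], "id": 1}
<-- {"jsonrpc": "2.0", "result": 19, "id": 1}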
Json-API is a specification for building REST APIs, and there are multiple libraries you can use to get started with it. In contrast to Json-Rpc, it requires you to host it on an HTTP server; you cannot invoke functions on the client with it, and you cannot run it over a non-HTTP transport protocol. Being REST based, it excels at providing information about resources. If you want an API based around the idea of Create, Read, Update, Delete on some collections of resources, then this may be a good choice.
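For example, a single resource in a Json-API response looks roughly like this (the article is the spec's own example):

{
  "data": {
    "type": "articles",
    "id": "1",
    "attributes": { "title": "JSON:API paints my bikeshed!" },
    "links": { "self": "http://example.com/articles/1" }
  }
}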
Json-API is going to be better if your API is resource-based, and you want your API to be browsable by a human without setting up documentation for it. Though that human would likely need to be in the software engineering field to make any sense of it.
Json-RPC is going to be better if your API is function based, or if you want the flexibility it provides. Json-RPC can still be used to manipulate resources by creating Create, Read, Update, and Delete functions for them, but you don't get the browsability, since it is not REST based. It can still be explored (not browsed) by a human by generating documentation based on the functions you expose.
A popular example of something that uses Json-Rpc is Bitcoin.
There are a lot of popular REST-based APIs, and Json-API is a spec with a bunch of tools to help you do REST right.
--
Note: Neither of those (Json-RPC or Json-API) is a great choice when you consider developer time, performance, or efficient use of network resources.
If you care about performance, efficiency, or developer time, then take a look at Google's gRPC, which is fantastic in those regards and can reduce developer time even more than a REST API, since client and server code can be generated from a protocol definition file.
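To give a sense of that workflow, here is a minimal sketch of such a protocol definition file (the service and message names are hypothetical):

// greeter.proto - gRPC generates client and server stubs from this
syntax = "proto3";

service Greeter {
  rpc SayHello (HelloRequest) returns (HelloReply);
}

message HelloRequest { string name = 1; }
message HelloReply { string message = 1; }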

What is the recommended way to watch for changes in a Couchbase document?

I want to use Couchbase but I want to implement change tracking in a few areas similar to the way RethinkDB does it.
There appears to be a handful of ways to have changes pushed to me from a Couchbase server:
DCP
TAP
XDCR
Which one is the correct choice, or is there a better method?
UPDATE
Thanks @Kirk! It looks like DCP does not have a 100% production-ready API today (5/19/2015). Your blog ref helped me decide to use XDCR for now and migrate to DCP as soon as an official API is ready.
For XDCR this GitHub Repo has been helpful.
Right now the only fully supported way is XDCR, as Kirk mentioned already. If you want to save time implementing it, you might want to base your code on this: https://github.com/couchbaselabs/couchbase-capi-server - it implements the server side of the XDCR protocol (v1); the ElasticSearch plugin, for example, is based on this CAPI server. XDCR is a good choice if your application is a server/service that can wait for incoming connections, so that Couchbase (or the administrator) controls how and when Couchbase replicates data to your service.
Depending on what you want to accomplish, DCP might end up being a better choice later, because it's conceptually different from XDCR. Any DCP-based solution would be pull-based (from your code's side), so you have more fine-grained, programmatic control over how and when to connect to a Couchbase bucket, and over how to distribute your connections across different processes if necessary. For a more in-depth example of using DCP, take a look at the Couchbase-Kafka connector here: https://github.com/couchbase/couchbase-kafka-connector
DCP is the proper choice if how it works fits your use case and you can write an application to consume the stream, as there is no official API... yet. Here is a blog post by one of the Couchbase Solutions Engineers about doing this in Java: http://nosqlgeek.blogspot.de/2015/05/dcp-magic.html
TAP is basically deprecated at this point. It is still in the product, but DCP is far superior to it in almost every fashion.
XDCR could be used, as it uses DCP, but you'd have to write a plug-in for XDCR. So you'd just be better off writing one directly to consume the DCP stream.

Using CouchDB to serve HTML

I'm trying to use CouchDB with an HTML/standalone REST architecture. That is, no app server other than CouchDB, just Ajax-style JavaScript calling CouchDB.
It looks like cross-site scripting restrictions are the problem. I was using Cloudkit/Tokyo Cabinet before, and it seems like the needed callback function in the URL was screwing it up.
Now I'm trying CouchDB and getting the same problem.
Here are my questions:
1) Are these problems because the REST/JSON store, like CouchDB or Cloudkit, is running on a different port from my web page? They're both run locally and called from "localhost".
2) Should I let CouchDB host my page and serve the HTML?
3) How do I do this? The documentation didn't seem so clear...
Thanks,
Alex
There is a simple answer: store static HTML as attachments to CouchDB documents. That way you can serve the HTML directly from CouchDB.
There is a command-line tool to help you do this, called CouchApp.
The book Mikeal linked to also has a chapter (Managing Design Documents) on how to use CouchApp to do this.
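For example, uploading a page as an attachment with curl (the database name, document ID, and revision here are placeholders):

# PUT an HTML file as an attachment; CouchDB then serves it at this same URL
curl -X PUT "http://localhost:5984/mydb/mydoc/index.html?rev=1-xxxx" \
  -H "Content-Type: text/html" \
  --data-binary @index.html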
3) You can use CouchDB shows to generate HTML (or any content type).
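For example, a minimal show function in a design document (the doc field is made up):

// In a design doc's "shows" section; renders a document as HTML.
// Invoked at GET /mydb/_design/app/_show/page/<docid>
function (doc, req) {
  return {
    headers: { "Content-Type": "text/html" },
    body: "<h1>" + doc.title + "</h1>"
  };
}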
There are huge advantages to having CouchDB serve/generate your HTML.
For one thing, the pages (which are HTTP resources) are tied to the data, or to queries on the data, and CouchDB knows to update the etag when a page has changed. This means that if you stick nginx in front of CouchDB and say "cache stuff", you get all the caching you would normally have to build yourself, for free.
I would push for nginx over Apache in front of CouchDB, because Apache isn't all that great at handling concurrent connections, while nginx and Erlang (CouchDB) are great at it.
Also, you can write these views in JavaScript, which is documented well in the CouchDB book (http://books.couchdb.org/relax/), or in Python using my view server (http://github.com/mikeal/couchdb-pythonviews), which isn't really documented at all yet, but I'll be getting to it soon :)
I hope that view servers in other languages start implementing the new features in the view server protocol as well so that anyone can write standalone apps in CouchDB.
I think one way is through mod_proxy in Apache. It forwards requests from Apache to CouchDB, which may solve the cross-scripting issue.
# Configuration for the proxy: forward /couchdb requests to the CouchDB instance
ProxyVia On
ProxyPass /couchdb http://<<couchdb host>>:5984/sampleDB
ProxyPassReverse /couchdb http://<<couchdb host>>:5984/sampleDB
I can't help thinking you need some layer between the presentation layer (HTML) and the model (CouchDB).
That way you can mediate requests and provide additional facilities and functionality. At the moment you seem to be rendering persisted objects directly to the presentation layer, and you'll have no facility to change or extend the behaviour of your system going forward.
Adopting a model-view-controller architecture will insulate your model from the presentation layer and give you some flexibility going forward.
(I confess I can't advise on your cross-site-scripting issues)