New to OpenShift, so I am a bit confused about whether we can run user pods on master or infra nodes. We have 2 worker nodes, plus one master node and one infra node, making 4 nodes in total. The reason for the change is to share the load across all 4 nodes rather than just the 2 compute nodes.
From reading some documents it seems possible to assign 2 roles to one node, but is there any security risk, or is it against best practice?
Running on openshift version v3.11.0+d699176-406
if we can run user pods on master or infra nodes
Yes, you absolutely can. The easiest way is to configure that at installation time; see https://docs.openshift.com/container-platform/3.11/install/example_inventories.html#multi-masters-using-native-ha-ai for an example.
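As a sketch (hostnames are placeholders, and the node-group names are assumptions based on the 3.11 defaults), the install-time inventory assigns roles per node via `openshift_node_group_name`; a combined group puts several roles on one host:

```ini
[nodes]
master.example.com  openshift_node_group_name='node-config-all-in-one'
infra.example.com   openshift_node_group_name='node-config-infra'
node1.example.com   openshift_node_group_name='node-config-compute'
node2.example.com   openshift_node_group_name='node-config-compute'
```

Here `node-config-all-in-one` gives the master the infra and compute roles as well; you can also define custom entries under `openshift_node_groups` to combine exactly the labels you want on each node.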
is there any security risk or is it not best practice
Running a single master node or a single infra node is already a risk to the high availability of your cluster. If the master fails, your cluster is essentially headless; if the infra node fails, you lose your internal registry and routers, and with them external access and the ability to build new images for your imagestreams. The same applies to host OS upgrades: you will have to reboot the master and infra nodes some day. Are you okay with guaranteed downtime during patching? What if something goes wrong during the update?
Regarding running user workloads on master and infra nodes: if you are not using privileged SCCs (which can allow running privileged pods, using arbitrary UIDs on the system, etc.), you are somewhat safe from a container breakout, assuming there are no known bugs in the container engine you are using. However, you should pay close attention to resource consumption and avoid running any workloads without CPU and memory limits, because overloading a master node may result in cluster degradation. You should also monitor disk usage, since running user pods loads more images into your Docker storage.
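The "no workloads without limits" advice can be enforced per project with a LimitRange, which applies default CPU/memory limits to containers that do not specify any. A minimal sketch (the name and values are illustrative, not recommendations):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
spec:
  limits:
  - type: Container
    default:            # limit applied when a container specifies none
      cpu: 500m
      memory: 512Mi
    defaultRequest:     # request applied when a container specifies none
      cpu: 100m
      memory: 256Mi
```

Create it in each project that schedules onto shared nodes, e.g. `oc create -f limits.yaml -n myproject`.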
So basically it boils down to this:
It is better to have multiple masters (ideally 3) and a couple of infra nodes than a single point of failure for both of these roles. Having separate master and worker nodes is of course better than hosting them together, but a multi-master setup, even with user workloads, should be more resilient if you watch resource usage carefully.
For the past two years, I've been testing the performance of our software in the Google Compute Engine (GCE), using up to a total of 1k vCPUs for the worker-VMs and the VMs for MySQL NDB Cluster that we use.
I have automated the creation of worker VMs (that run our software) with the help of templates and instance groups, but for the MySQL NDB cluster I've always had two fixed machines which I occasionally resized manually (i.e. change the vCPUs and RAM).
I am now trying to bring up a variable number of VMs for the NDB cluster with different amounts of vCPUs and RAM, and I'd like to automate this process in my test suite. The main problem is that the NDB config expects fixed HostNames. With GCE templates and instance groups, I would only be able to bring up dynamically named instances like ndb-1ffrd, ndb-i8fhh, ....
To exemplify, here is my current ndb config (one of many) that involves fixed VMs ndb1 and ndb2:
[ndbd]
HostName=ndb1
NodeId=1
NodeGroup=0
[ndbd]
HostName=ndb2
NodeId=2
NodeGroup=0
[ndbd]
HostName=ndb1
NodeId=3
NodeGroup=1
[ndbd]
HostName=ndb2
NodeId=4
NodeGroup=1
I'd like to convert the fixed VMs ndb1/ndb2 into GCE instance groups where I can choose to bring an arbitrary number of such instances up (typically 2 or 4 though) for a test and then destroy the VMs afterwards, and perform this on-demand in an automated fashion during my tests. Reasoning behind this is to have repeatable tests with differently configured VMs. Changing many parameters manually over several tests leads to a nightmare when figuring out what the exact configuration was 10 tests ago -- this way each test would refer to a specific instance template for the ndb VMs.
However, GCE instance group members have a random suffix in their name and the NDB config expects fixed HostNames or IPs. So I'd need to either:
Have GCE generate instances from instance groups named in a deterministic way (e.g. ndb1, ndb2, ndb3, ...), so that I can rely on those names in my ndb configs, or
Somehow allow arbitrary hosts (or hosts with arbitrary suffixes) to connect as ndb nodes but still make sure that the same host isn't added to the same NodeGroup more than once -- something that is manually ensured in the above sample config.
Is there any way to achieve what I'm trying to do?
I think this can be achieved with scripts using the gcloud SDK; gcloud lets you launch resources in GCP, and there are many options/flags that may help you set the configuration you need.
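As a sketch of that approach: once you have the instance names (the instance-group name and the discovery command in the comment are assumptions), generating `[ndbd]` sections with deterministic NodeIds and NodeGroups is plain string work. Note this maps one ndbd process per host, unlike the two-per-host sample config above; adjust as needed:

```python
def ndb_config(hosts, replicas=2):
    """Render [ndbd] sections, assigning NodeIds sequentially and
    spreading each NodeGroup across `replicas` distinct hosts."""
    if len(hosts) % replicas != 0:
        raise ValueError("host count must be a multiple of NoOfReplicas")
    sections = []
    node_id = 1
    for group, start in enumerate(range(0, len(hosts), replicas)):
        for host in hosts[start:start + replicas]:
            sections.append(
                "[ndbd]\nHostName=%s\nNodeId=%d\nNodeGroup=%d"
                % (host, node_id, group))
            node_id += 1
    return "\n\n".join(sections)

# Host names as discovered from the instance group, e.g. via something like:
#   gcloud compute instance-groups list-instances ndb-group --format='value(NAME)'
print(ndb_config(["ndb-1ffrd", "ndb-i8fhh"]))
```

Each test run can then regenerate config.ini from whatever instances the template brought up, which also gives you the per-test reproducibility you are after.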
Hope this helps.
I have created an Amazon Aurora database cluster running MySQL with three instances: the main instance that backs the cluster and two read replicas for balancing. However, the cluster does not seem to be balancing the reads at all. I have one replica handling 700+ selects/sec, maxing out its CPU at 99.75% or higher, while the other replica is doing virtually nothing at 4% CPU and 1 select per second, if that. The main cluster instance itself is at 33% CPU, as it is being written to while the replicas are being read from. The lag between the replicas is under 20 milliseconds. My application is querying the read-only endpoint of the cluster, but no balancing appears to be happening. Does anyone have any insight into why this may be happening, or why one replica is at such high CPU usage? The queries being run against it are not complex by any means.
Aurora cluster endpoints are DNS records, and they only do DNS round robin during resolution. This means that when your client application opens connections to a cluster endpoint, you resolve the endpoint to different instances (different IPs, basically), thereby striping your connections across multiple replicas. Past that point, there is no load balancing: connections are striped across instances, and queries run on each connection go to the instance backing it.
Now consider the scenario where your connection pool was created against the cluster endpoint while only one instance was behind it. If you then add more instances, there will be no impact on your application unless you terminate your connections and re-establish them. Doing so triggers DNS round robin again, and this time some of your connections will land on the newly provisioned instance.
A few callouts:
In Aurora, you have 2 cluster endpoints. One (RW) endpoint always points to the current writer and one (RO) does the DNS round robin between your read replicas.
Also, DNS propagation might take a few seconds during a failover, so occasional errors at that time are quite natural.
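The connection-level striping described above can be illustrated with a toy model (no real DNS involved; the IPs are made up):

```python
import itertools

# Toy model: the cluster reader endpoint resolves to a rotating list of
# replica IPs (DNS round robin). Each *connection* is pinned to whichever
# IP it resolved at open time, so balancing happens per connection only.
replica_ips = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]
resolver = itertools.cycle(replica_ips)

# Opening 6 connections stripes them evenly across the three replicas...
pool = [next(resolver) for _ in range(6)]
print(pool)

# ...but every query issued on a given connection keeps hitting the same
# instance; nothing re-balances individual queries after connect time.
queries = [(conn, "SELECT 1") for conn in pool]
```

This is why a pool of long-lived connections created while one replica was hot (or while only one replica existed) stays skewed until the connections are recycled.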
Hope this helps.
We've implemented a driver to try to mitigate this problem, with some visible gains: https://github.com/DiceTechnology/dice-fairlink
It regularly discovers the read-replicas to catch up with cluster changes and round-robins connections among them.
Although it does not measure CPU utilisation, we've observed a better load distribution than with the native DNS-based round robin of the cluster reader endpoint.
Aurora's DNS-based load balancing works at the connection level, not the individual query level. To get a different instance IP on each resolution, you must keep resolving the endpoint without caching DNS. If you resolve the endpoint once and then keep the connection in your pool, every query on that connection goes to the same instance; and if you cache DNS, you receive the same instance IP each time you resolve the endpoint.
Unless you use a smart database driver, you depend on DNS record updates and DNS propagation for failovers, instance scaling, and load balancing across Aurora Replicas. Currently, Aurora DNS zones use a short Time-To-Live (TTL) of 5 seconds. Ensure that your network and client configurations don't further increase the DNS cache TTL. Remember that DNS caching can occur anywhere from your network layer, through the operating system, to the application container. For example, Java virtual machines (JVMs) are notorious for caching DNS indefinitely unless configured otherwise. See the AWS documentation and the Aurora whitepaper on configuring the DNS cache TTL.
My guess is that you are not connecting to the cluster endpoint.
Load Balancing – Connecting to the cluster endpoint allows Aurora to load-balance connections across the replicas in the DB cluster. This helps to spread the read workload around and can lead to better performance and more equitable use of the resources available to each replica. In the event of a failover, if the replica that you are connected to is promoted to the primary instance, the connection will be dropped. You can then reconnect to the reader endpoint in order to send your read queries to the other replicas in the cluster.
New Reader Endpoint for Amazon Aurora – Load Balancing & Higher Availability
[EDIT]
To load-balance within a single application, you will need to reconnect to the endpoint; if you use the same connection for all queries, only one replica will be responding. However, opening connections is expensive, so this might not provide much benefit unless your queries take some time to run.
We are building a small advertising platform that will be used on several client sites. We want to setup multiple servers and load balancing (using Amazon Elastic Load Balancer) to help prevent downtime.
Our basic functions include rendering HTML for ads, recording click information (IP, user-agent, location, etc.), redirecting traffic with their click ID as a tracking variable (?click_id=XX), and basic data analysis for clients. It is very important that two different clicks don't end up with the same click ID.
I understand the basics of load balancing, but I'm not sure how to setup the MySQL server(s).
It seems there are a lot of options out there: master-master, master-slave, clusters, shards.
I'm trying to figure out what is best for us. The most important aspects we are looking for are:
Uptime - if one server goes down, automatically get picked up by another server.
Load sharing - keep CPU and RAM usage spread out.
From what I've read, it sounds like my best option might be a Master with 2 or more slaves. The Master would not be responsible for any HTTP traffic, that would go to the slaves only. The Master server would therefore only be responsible for database writes.
Would this slow down our click script? Since we have to insert first to get a click ID before redirecting, the Slave would have to contact the Master and then return with the data. Right now our click script is lightning fast and I'd like to keep it that way.
Also, what happens if the Master goes down? Would a slave be able to serve as the Master until the Master was back online?
If you use Amazon's managed database service, RDS, this will take a lot of the pain out of managing your database.
You can select the multi-AZ option on your master database instance to provide a redundant, synchronously replicated slave in another availability zone. In the event of a failure of the instance or the entire availability zone Amazon will automatically flip the A record pointing to your master instance to the slave in the backup AZ. This process, on vanilla MySQL or MariaDB, can take a couple of minutes during which time your application will be unable to write to the database.
You can also provision up to 5 read replicas for a MySQL or MariaDB instance that will replicate from the master asynchronously. You could then use an Elastic Load Balancer (or other TCP load balancer such as HAProxy or MariaDB's MaxScale for a more MySQL aware option) to distribute traffic across the read replicas. By default each of these read replicas will have a full copy of the master's data set but if you wanted to you could attempt to manually shard the data across these. You'd have to have some more complicated logic in your application or the load balancer to work out where to find the relevant shard of the data set though.
You can choose to promote a read replica into a stand alone master which will break replication to the master and give you a stand alone cluster which can then be reconfigured as to your previous setup (or something different if you want and just using the data set you had at the point of promotion). It doesn't sound like something you need to do here though.
Another option would be to use Amazon's own flavour of MySQL, Aurora, on RDS. Aurora is completely MySQL-compatible over the wire, so you can use whatever MySQL driver your application already uses to talk to it. Aurora allows up to 15 read replicas and largely transparent load balancing: you simply provide your application with the Aurora cluster endpoint; writes happen on the master and reads are balanced across however many read replicas you have in the cluster. In my limited testing, Aurora's failover between instances is pretty much instant too, which minimises downtime in the event of a failure.
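If you stay on plain read replicas rather than Aurora, the read/write split usually lives in the application or a proxy. A toy sketch of that routing decision (names are invented; a real setup would hand this to a MySQL driver or a proxy such as MaxScale):

```python
import random

class RoutingPool:
    """Toy read/write splitter: writes (e.g. the INSERT that creates a
    click ID) always go to the master; reads are spread over replicas."""

    def __init__(self, master, replicas):
        self.master = master
        self.replicas = replicas

    def route(self, sql):
        # Statements that modify data must hit the single writer.
        if sql.lstrip().upper().startswith(("INSERT", "UPDATE", "DELETE")):
            return self.master
        # Reads can go to any replica.
        return random.choice(self.replicas)

pool = RoutingPool("master-db", ["replica-1", "replica-2"])
print(pool.route("INSERT INTO clicks (ip) VALUES ('1.2.3.4')"))  # master-db
```

This also makes the click-script cost visible: the INSERT that allocates the click ID always pays one round trip to the master, regardless of how many replicas serve the reads.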
I run a pretty customized cluster for processing large amounts of scientific data based on a basic LAMP design. In general, I run a separate MySQL server with around 128GB of RAM and about 1TB of storage. Separately, I run a head node that serves as an NFS mount point for the data input of my process, and a webserver to display results. Finally, I typically have a few compute nodes that get their jobs from a MySQL table, fetch the data from NFS, do some heavy lifting, then put the results into MySQL.
I have come across a dataset I would like to process which is pretty large (1TB of input data), and I don't really have the hardware on hand to handle it. As a result, I began investigating google compute engine etc, and the prospect of scaling instances to process these data rapidly with the results stored in a mysql instance. Upon completion the mysql tables could be dumped from the cloud and brought up locally for analysis. I would have no problem deploying a MySQL server, along with the rest of the LAMP pieces and the compute nodes, but I can't quite figure out how I would do this in the cloud.
A major sticking point seems to be the lack of read/write NFS, which would allow me to get the data onto several instances, crunch it, then push the results to MySQL. This is a necessary step for me, as I could queue hundreds of jobs from the webserver and then have the instances (as many as 50-100) pick the jobs up by connecting to a centralized MySQL instance to find out which jobs each instance needs to do and where the data is, process the data (a file conversion happens there, which makes the write part necessary), crunch it, then load the results into MySQL. I hope I'm explaining my situation clearly. This seems like a great example of a CPU-intensive process that would scale nicely in the cloud, I just can't seem to put all the pieces together... Any input is appreciated!
It sounds quite possible; I've been doing similar things in GCE for a while now.
NFS mount - you just need to configure it as you would normally: set up the NFS server on the head node, then configure the clients on the slave nodes to mount it. Here and here are some basic configuration instructions for CentOS 6 I used to get NFS up and running.
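A minimal sketch of that setup (the /data path, hostname, and subnet are assumptions; 10.240.0.0/16 is the range GCE legacy networks used by default):

```
# /etc/exports on the head node - export /data read/write to the subnet
/data  10.240.0.0/16(rw,sync,no_root_squash)

# /etc/fstab on each compute node - mount the head node's export
head-node:/data  /data  nfs  defaults  0 0
```

After editing /etc/exports, run `exportfs -ra` on the head node and `mount /data` on the clients.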
Setting up a LAMP stack is very straightforward. These machines run pretty much vanilla Linux distros, so you can just use yum or apt-get to install components.
For the cluster, you will probably end up having an image for the head node you use once, and then another image for the slave nodes that you replicate for each one.
For the scheduler, I've used Condor and SGE successfully, but I'm sure others would work just as well.
Hope this helps.
I have read MongoDB's replication docs and MySQL's Cluster page, but I cannot figure out how to configure my web apps to connect to the database.
My apps will have connection information: database host, username, password, etc. However, even with a multi-server setup, do I need a big master with a fixed IP that distributes the load to the other servers? And how can I prevent a single point of failure? Are there any common approaches to this problem?
Features such as MongoDB's replica sets are designed to enable automatic failover and recovery. These will help avoid single points of failure at the database level if properly configured. You don't need a separate "big master" to distribute the load; that is the gist of what replica sets provide. Your application connects using a database driver and generally does not need to be aware of the status of individual replicas. For critical writes in MongoDB you can request that the driver does a "safe" commit which requires data to be confirmed written to a minimum number of replicas.
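As an illustration of that driver-side configuration: the connection string can list several replica-set members as seeds and request acknowledgement from a majority of nodes before a write counts as committed. The hostnames here are placeholders:

```python
def replica_set_uri(hosts, replica_set, w="majority"):
    """Build a MongoDB connection string that seeds the driver with
    several replica-set members (so no single host is a point of
    failure for discovery) and sets the write concern via `w`."""
    return "mongodb://%s/?replicaSet=%s&w=%s" % (
        ",".join(hosts), replica_set, w)

uri = replica_set_uri(
    ["db1.example.com:27017", "db2.example.com:27017", "db3.example.com:27017"],
    "rs0")
print(uri)

# A driver such as PyMongo would take it from here:
#   client = pymongo.MongoClient(uri)
# and would follow the replica set's elected primary automatically.
```

The application only holds this one URI; which member is currently primary is the replica set's problem, not the app's.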
To be comprehensively insulated from server failures, you still have to consider other factors such as physical failure of disks, machines, or networking equipment and provision with appropriate redundancy. For example, your replica sets should be distributed across more than one server or instance. If all of those instances are in the same physical colocation facility, your single point of failure could still be the hopefully unlikely (but possible) case where the colocation facility loses power or network.