I've been researching the new options for running Docker containers on Google Cloud Run, but there seems to be no advice on whether or not one should run MySQL on Cloud Run. I know it isn't a web service, and I understand that in the official GCP documentation Google would probably just tell people to kindly use Cloud SQL (their managed SQL offering). Still, I haven't found any advice online about "running MySQL on Cloud Run", so I thought I'd ask here.
Will cold-start times decrease the performance of the solution (assuming one uses a Cloud Storage bucket for storing the data)?
Running a SQL database is not a good fit for Cloud Run.
First of all, the contract between the deployed container and Cloud Run is that the container needs to run an HTTP server on port 8080. That's not really the way MySQL works.
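To make that contract concrete, here is a minimal sketch (Python here purely for illustration) of the kind of workload Cloud Run expects: a process that answers HTTP requests on the port provided through the PORT environment variable (8080 by default):

    import os
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Cloud Run routes each incoming request to this HTTP handler.
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"hello from Cloud Run\n")

    if __name__ == "__main__":
        # Cloud Run tells the container which port to listen on via PORT (default 8080).
        port = int(os.environ.get("PORT", "8080"))
        HTTPServer(("0.0.0.0", port), Handler).serve_forever()

MySQL does not fit this shape: it speaks its own wire protocol (typically on port 3306) and expects durable local storage.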
Second of all, the container is going to be limited to the filesystem that was included in the container image. This same image is going to be instantiated many times over as the service handles load. There will be no way to persist the data written to MySQL. You could have read-only data stored in that image that only changes when a new image is published, but that's not really what you would expect to use a relational database for.
Cloud Run is really good at operating HTTP/web services in a serverless and scalable way. These web services typically make use of other APIs and services deployed to Google Cloud, or third-party services. It's not really meant to offer persistent, scalable, ACID-compliant database services; that is a whole different sort of problem space.
Just for fun, I'm designing a few web applications using a microservices architecture. I'm trying to determine the best way to do configuration management, and I'm worried that my approach for configuration may have some enormous pitfalls and/or something better exists.
To frame the problem, let's say I have an authentication service written in C++, an identity service written in Rust, an analytics service written in Haskell, some middle tier written in Scala, and a frontend written in JavaScript. There would also be the corresponding identity DB, auth DB, analytics DB (maybe a Redis cache for sessions), etc. I'm deploying all of these apps using Docker Swarm.
Whenever one of these apps is deployed, it necessarily has to discover all the other applications. Since I use Docker Swarm, discovery isn't an issue as long as all the nodes share the requisite overlay network.
However, each application still needs the upstream services' host_addr, maybe a port, the credentials for some DB or sealed service, etc.
I know Docker has secrets, which enable apps to read the configuration from inside the container, but I would then need to write a configuration parser in each language for each service. This seems messy.
What I would rather do is have a configuration service, which maintains knowledge about how to configure all other services. So, each application would start with some RPC call designed to get the configuration for the application at runtime. Something like
    int main() {
        AppConfig cfg = configClient.getConfiguration("APP_NAME");
        // ... do application things and pass cfg around ...
        return 0;
    }
The AppConfig class would be defined in an IDL, so it would be immediately available in every language and language-agnostic.
This seems like a good solution, but maybe I'm really missing the point here. Even at scale, tens of thousands of nodes can be served easily by a few configuration services, so I don't foresee any scaling issues. Again, it's just a hobby project, but I like thinking about the "what-if" scenarios :)
How are configuration schemes handled in microservices architecture? Does this seem like a reasonable approach? What do the major players like Facebook, Google, LinkedIn, AWS, etc... do?
Instead of building a custom configuration management solution, I would use one of these existing ones:
Spring Cloud Config
Spring Cloud Config is a config server written in Java offering an HTTP API to retrieve the configuration parameters of applications. Obviously, it ships with a Java client and a nice Spring integration, but as the server is just an HTTP API, you may use it with any language you like. The config server also features symmetric/asymmetric encryption of configuration values.
Configuration source: The externalized configuration is stored in a Git repository which must be made accessible to the Spring Cloud Config server. The properties in that repository are then accessible through the HTTP API, so you can even consider implementing an update process for configuration properties.
Server location: Ideally, you make your config server accessible through a domain (e.g. config.myapp.io), so you can implement load-balancing and fail-over scenarios as needed. Also, all you need to provide to all your services then is just that exact location (and some authentication / decryption info).
Getting started: You may have a look at this getting started guide for centralized configuration on the Spring docs or read through this Quick Intro to Spring Cloud Config.
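To illustrate that any language can consume it, here is a hedged sketch (the server address config.myapp.io, application name identity-service, and profile prod are made-up examples) of fetching properties over the config server's /{application}/{profile} HTTP endpoint:

    import json
    import urllib.request

    # Hypothetical config server location, application name, and profile.
    CONFIG_URL = "https://config.myapp.io/identity-service/prod"

    def fetch_config(url=CONFIG_URL):
        # The server answers with a JSON "environment" document containing one
        # property source per matching file in the backing Git repository.
        with urllib.request.urlopen(url) as resp:
            doc = json.load(resp)
        # Flatten into a single name -> value dict; sources are listed in
        # precedence order, so apply them in reverse and let earlier ones win.
        merged = {}
        for source in reversed(doc.get("propertySources", [])):
            merged.update(source.get("source", {}))
        return merged

    if __name__ == "__main__":
        cfg = fetch_config()
        print(cfg.get("db.url"))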
Netflix Archaius
Netflix Archaius is part of the Netflix OSS stack and "is a Java library that provides APIs to access and utilize properties that can change dynamically at runtime".
While it is limited to Java (which does not quite match the polyglot context you asked about), the library is capable of using a database as the source for the configuration properties.
confd
confd keeps local configuration files up-to-date using data stored in external sources (etcd, consul, dynamodb, redis, vault, ...). After configuration changes, confd restarts the application so that it can pick up the updated configuration file.
In the context of your question, this might be worthwhile to try, as confd makes no assumptions about the application and requires no special client code. Most languages and frameworks support file-based configuration, so confd should be fairly easy to add on top of existing microservices that currently use environment variables and did not anticipate decentralized configuration management.
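As a hedged illustration (the key names, file paths, and reload command below are made up), a confd template resource and its template might look like this; confd watches the keys in the backing store, rewrites the destination file when they change, and then runs the reload command:

    # /etc/confd/conf.d/myapp.toml -- the template resource
    [template]
    src = "myapp.conf.tmpl"
    dest = "/etc/myapp/myapp.conf"
    keys = [
      "/myapp/database/url",
      "/myapp/database/password",
    ]
    reload_cmd = "systemctl restart myapp"

    # /etc/confd/templates/myapp.conf.tmpl -- an ordinary config file with placeholders
    database_url = {{getv "/myapp/database/url"}}
    database_password = {{getv "/myapp/database/password"}}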
I don't have a good solution for you, but I can point out some issues for you to consider.
First, your applications will presumably need some bootstrap configuration that enables them to locate and connect to the configuration service. For example, you mentioned defining the configuration service API with IDL for a middleware system that supports remote procedure calls. I assume you mean something like CORBA IDL. This means your bootstrap configuration will not be just the endpoint to connect to (specified perhaps as a stringified IOR or a path/in/naming/service), but also a configuration file for the CORBA product you are using. You can't download that CORBA product's configuration file from the configuration service, because that would be a chicken-and-egg situation. So, instead, you end up having to manually maintain a separate copy of the CORBA product's configuration file for each application instance.
Second, your pseudo-code example suggests that you will use a single RPC invocation to retrieve all the configuration for an application in a single go. This coarse level of granularity is good. If, instead, an application used a separate RPC call to retrieve each name=value pair, then you could suffer major scalability problems. To illustrate, let's assume an application has 100 name=value pairs in its configuration, so it needs to make 100 RPC calls to retrieve its configuration data. I can foresee the following scalability problems:
Each RPC might take, say, 1 millisecond round-trip time if the application and the configuration server are on the same local area network, so your application's start-up time is 1 millisecond for each of 100 RPC calls = 100 milliseconds = 0.1 second. That might seem acceptable. But if you now deploy another application instance on another continent with, say, a 50 millisecond round-trip latency, then the start-up time for that new application instance will be 100 RPC calls at 50 milliseconds latency per call = 5 seconds. Ouch!
The need to make only 100 RPC calls to retrieve configuration data assumes that the application will retrieve each name=value pair once and cache that information in, say, an instance variable of an object, and then later on access the name=value pair via that local cache. However, sooner or later somebody will call x = cfg.lookup("variable-name") from inside a for-loop, and this means the application will be making an RPC every time around the loop. Obviously, this will slow down that application instance, but if you end up with dozens or hundreds of application instances doing that, then your configuration service will be swamped with hundreds or thousands of requests per second, and it will become a centralised performance bottleneck.
You might start off writing long-lived applications that do 100 RPCs at start-up to retrieve configuration data, and then run for hours or days before terminating. Let's assume those applications are CORBA servers that other applications can communicate with via RPC. Sooner or later you might decide to write some command-line utilities to do things like: "ping" an application instance to see if it is running; "query" an application instance to get some status details; ask an application instance to gracefully terminate; and so on. Each of those command-line utilities is short-lived; when they start up, they use RPCs to obtain their configuration data, then do the "real" work by making a single RPC to a server process to ping/query/kill it, and then they terminate. Now somebody will write a UNIX shell script that calls those ping and query commands once per second for each of your dozens or hundreds of application instances. This seemingly innocuous shell script will be responsible for creating dozens or hundreds of short-lived processes per second, and each of those short-lived processes will make numerous RPC calls to the centralised configuration server to retrieve name=value pairs one at a time. That sort of shell script can put a massive load on your centralised configuration server.
I am not trying to discourage you from designing a centralised configuration server. The above points are just warnings about scalability issues you need to consider. Your plan for an application to retrieve all its configuration data via one coarse-granularity RPC call will certainly help you to avoid the kinds of scalability problems I mentioned above.
To provide some food for thought, you might want to consider a different approach. You could store each application's configuration files on a web server. A shell start script "wrapper" for an application can do the following (a rough sketch of such a wrapper follows the list):
Use wget or curl to download "template" configuration files from the web server and store the files on the local file system. A "template" configuration file is a normal configuration file but with some placeholders for values. A placeholder might look like ${host_name}.
Also use wget or curl to download a file containing search-and-replace pairs, such as ${host_name}=host42.pizza.com.
Perform a global search-and-replace of those search-and-replace terms on all the downloaded template configuration files to produce the configuration files that are ready to use. You might use UNIX shell tools like sed or a scripting language to perform this global search-and-replace. Alternatively, you could use a templating engine like Apache Velocity.
Execute the actual application, using a command-line argument to specify the path/to/downloaded/config/files.
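Putting those four steps together, here is a rough Python equivalent of such a wrapper script (the web server URL, file names, and application binary are made-up examples, and error handling is omitted):

    import os
    import urllib.request

    # Hypothetical locations; in practice these would point at your config web server.
    BASE_URL = "http://config.example.com/myapp"
    WORK_DIR = "/tmp/myapp-config"

    def download(name):
        os.makedirs(WORK_DIR, exist_ok=True)
        path = os.path.join(WORK_DIR, name)
        urllib.request.urlretrieve(f"{BASE_URL}/{name}", path)
        return path

    def main():
        # Steps 1 and 2: download the template config file and the search-and-replace pairs.
        template_path = download("app.conf.template")
        pairs_path = download("replacements.properties")  # lines like ${host_name}=host42.pizza.com

        replacements = {}
        with open(pairs_path) as f:
            for line in f:
                if "=" in line:
                    placeholder, value = line.rstrip("\n").split("=", 1)
                    replacements[placeholder] = value

        # Step 3: global search-and-replace to produce the ready-to-use config file.
        with open(template_path) as f:
            text = f.read()
        for placeholder, value in replacements.items():
            text = text.replace(placeholder, value)
        config_path = os.path.join(WORK_DIR, "app.conf")
        with open(config_path, "w") as f:
            f.write(text)

        # Step 4: exec the actual application, pointing it at the generated config file.
        os.execvp("myapp", ["myapp", "--config", config_path])

    if __name__ == "__main__":
        main()

The application itself never talks to the web server; it only ever sees an ordinary local configuration file.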
So I am using Mantl.io for our environment. Things are going very well, and we are now past the POC phase and starting to think about how we are going to handle continuous delivery. Obviously automation is key. Maybe my approach or thinking is wrong, but I am trying to figure out a way to manage the JSON I will pass to Marathon to deploy the Docker containers from our registry via a Jenkins job call. We have various environments (testing, perf, prod, etc.), and in each of these environments I will have my 30+ microservices needing different variables set for CPU, memory, environment variables, etc.
So I am just not sure of the best approach for taking my Docker containers and linking them with what could be maybe 10 or more different configurations per microservice, depending on the environment.
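To make the problem concrete, here is a rough sketch of what I mean (service names, values, and registry URL are made up): each microservice would have a base Marathon app definition plus per-environment overrides that get merged into the JSON a Jenkins job would POST to Marathon.

    import json

    # Made-up example: a base Marathon app definition shared by all environments...
    BASE = {
        "id": "/myapp/auth-service",
        "container": {
            "type": "DOCKER",
            "docker": {"image": "registry.example.com/auth-service:1.4.2"},
        },
        "instances": 2,
        "cpus": 0.5,
        "mem": 512,
        "env": {"LOG_LEVEL": "info"},
    }

    # ...plus per-environment overrides for cpu, memory, env vars, etc.
    OVERRIDES = {
        "testing": {"instances": 1, "mem": 256, "env": {"LOG_LEVEL": "debug"}},
        "perf":    {"instances": 4, "cpus": 1.0, "mem": 1024},
        "prod":    {"instances": 6, "cpus": 1.0, "mem": 2048},
    }

    def render(environment):
        app = dict(BASE)
        for key, value in OVERRIDES[environment].items():
            if isinstance(value, dict):
                app[key] = {**BASE.get(key, {}), **value}  # merge nested dicts like "env"
            else:
                app[key] = value
        return json.dumps(app, indent=2)

    if __name__ == "__main__":
        # This JSON is what the Jenkins job would POST to Marathon's /v2/apps endpoint.
        print(render("perf"))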
Are there tools for building, managing, versioning, linking containers to configs to environments? I just can't seem to find anything in this realm and that leads me to believe I am headed down the wrong path.
Thanks
I have a Kubernetes compute cluster running on GCE and am reasonably happy so far. I know that if I create a Kubernetes cluster, I get to see the nodes as VM instances and the cluster as an instance group. I would like to do it the other way around: create the instances/instance group first and make the Kubernetes cluster out of them, so they can be managed by Kubernetes. The reason I want to do this is to try and make the nodes preemptible, which might better fit my workload.
So the question: how do I set up a Kubernetes cluster with preemptible nodes? Right now I can do either one or the other, but not both together.
There is a patch out for review at the moment (#12384) that adds a configuration option to mark the nodes in the instance group as preemptible. If you are willing to build from head, this should be available as a configuration option in the next couple of days. In the meantime, you can see from the patch how easy it is to modify the GCE startup scripts to make your VMs preemptible.
I'm new to databases and web servers and that kind of thing. So I am looking for information so I can begin to figure out a starting point and options open to me.
I need to have a database that can be accessed by an iPhone app. So logically it will be hosted on a webserver somewhere.
To get/insert the data from/into the database, the app would make an HTTP connection to a PHP file on the same server as the DB, which would then insert/return the relevant data. To stop random hackers messing with the DB, the app would have some validation code inside it to send to the PHP file, to check that it's not a hacker trying to mess with the database. Does this all make sense, or will that not be secure enough?
Now the most confusing part to get my head around is this:
I need to check every minute whether any data in the database has become too old, and remove it if so. So something needs to be running on the server constantly checking/managing the database. What would this be? What is commonly used to do this kind of thing? Is there some keyword for it that I can start searching and reading about to see what options there are?
Thanks for your advice,
-Code
One way to do this is to have a purge script run via crontab. The script can run every minute and check for old data and remove it.
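For example (the table name, column name, retention window, and credentials below are made up, and PyMySQL is just one choice of driver), a small script that deletes rows older than ten minutes, scheduled from crontab with a line like * * * * * /usr/local/bin/purge_old_rows.py:

    #!/usr/bin/env python
    # Hypothetical purge script: deletes rows older than 10 minutes from a
    # made-up "sessions" table.
    import pymysql

    def purge():
        conn = pymysql.connect(host="localhost", user="appuser",
                               password="secret", database="appdb")
        try:
            with conn.cursor() as cur:
                cur.execute(
                    "DELETE FROM sessions "
                    "WHERE created_at < NOW() - INTERVAL 10 MINUTE"
                )
            conn.commit()
        finally:
            conn.close()

    if __name__ == "__main__":
        purge()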
MySQL versions 5.1.6 and later have a built-in event scheduler, which can be used to schedule periodic jobs inside the MySQL server itself.
http://dev.mysql.com/doc/refman/5.1/en/events.html
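For example (the table, column, and retention window below are made up), a minimal event that runs every minute:

    -- The event scheduler is off by default in 5.1; enable it once.
    SET GLOBAL event_scheduler = ON;

    -- Purge rows older than 10 minutes, every minute.
    CREATE EVENT purge_old_sessions
      ON SCHEDULE EVERY 1 MINUTE
      DO
        DELETE FROM sessions
        WHERE created_at < NOW() - INTERVAL 10 MINUTE;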
Sounds to me like you need a cron job. Cron is the standard task-scheduling application for Unix-type systems.
You would have some sort of script that connects to the database and performs a cleanup query, and you would schedule that script via cron.
http://en.wikipedia.org/wiki/Cron