Azure API Management log optimization and cost saving

I have an Azure API Management instance. I couldn't find an answer in the Azure docs, so I'm asking here: in the API Management settings, both logging options (Azure Monitor and Application Insights) are enabled, with every setting inside them turned on. With this configuration, both options enabled, am I paying twice for logs?
My other question is: which logs are not useful and can be disabled on APIM to save cost?
Regards

To save costs, you could use only App Insights for logging. However, keep in mind that neither option fully replaces the other, although there is some overlap.
App Insights is part of Azure Monitor, but it acts as an application-level problem monitor rather than giving you deep insight into your wider Azure infrastructure.
If your day-to-day work doesn't require deeper insight into the APIs, such as telemetry analysis, then perhaps the Azure Monitor diagnostics are not needed.
Logging can also be configured per API and/or per APIM instance, so if you run into performance issues on one of them you might want to enable Azure Monitor there rather than rely on App Insights alone.
Without deeper insight into the specifics of your business and infrastructure, my advice is to keep both for the production instance and cut back on Azure Monitor for dev and staging if you'd like.
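If you do decide to trim the Azure Monitor side on dev/staging, APIM sends those logs through the diagnostic settings attached to the resource, so turning them off means removing (or narrowing) those settings. Below is a minimal sketch, assuming the azure-identity and requests Python packages and sufficient rights on the resource; the subscription, resource group, and instance names are placeholders, and the api-version is illustrative and may differ in your environment.

```python
# Minimal sketch: list (and optionally delete) the Azure Monitor diagnostic
# settings on an APIM instance, which are what send its logs to Log Analytics.
# Resource names below are placeholders; the api-version may need adjusting.
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION = "<subscription-id>"      # placeholder
RESOURCE_GROUP = "<resource-group>"     # placeholder
APIM_NAME = "<apim-instance>"           # placeholder

resource_id = (
    f"/subscriptions/{SUBSCRIPTION}/resourceGroups/{RESOURCE_GROUP}"
    f"/providers/Microsoft.ApiManagement/service/{APIM_NAME}"
)

# Acquire an ARM token with whatever credential is available locally.
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
headers = {"Authorization": f"Bearer {token}"}

url = (
    "https://management.azure.com" + resource_id +
    "/providers/Microsoft.Insights/diagnosticSettings"
    "?api-version=2021-05-01-preview"
)

# Show which diagnostic settings exist and which log categories they ship.
settings = requests.get(url, headers=headers).json().get("value", [])
for s in settings:
    print(s["name"], [c.get("category") for c in s["properties"].get("logs", [])])

# Deleting a setting stops that Azure Monitor ingestion (and its cost), e.g.:
# requests.delete(
#     f"https://management.azure.com{resource_id}/providers/"
#     f"Microsoft.Insights/diagnosticSettings/{settings[0]['name']}"
#     "?api-version=2021-05-01-preview",
#     headers=headers,
# )
```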

Related

Should I run mysql on google cloud run? (or any database)

I've been researching the new option to run Docker containers on Google Cloud Run, but there seems to be no advice on whether or not one should run MySQL on Cloud Run. I know it isn't a web service, and I understand that in the official GCP documentation Google would probably just tell people to use Cloud SQL (their managed SQL offering), but I haven't found any advice online about "running MySQL on Cloud Run", so I thought I'd ask here.
Will startup times from cold starts decrease the performance of the solution (assuming one uses a bucket for storing the data)?
Running a SQL database is not a good fit for Cloud Run.
First of all, the contract between the deployed container and Cloud Run is that the container needs to run an HTTP server on port 8080. That's not really the way MySQL works.
Second of all, the container is going to be limited to the filesystem that was included in the container image. This same image is going to be instantiated many times over as the service handles load. There will be no way to persist the data written to MySQL. You could have read-only data stored in that image that only changes when a new image is published, but that's not really what you would expect to use a relational database for.
Cloud Run is really good at operating HTTP/web services in a serverless and scalable way. These web services typically make use of other APIs and services deployed to Google Cloud, or third-party services. It's not really meant to offer persistent, scalable, ACID-compliant database services - that's a whole different sort of problem space.
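To make the contract concrete, here is a rough sketch (handler and message are purely illustrative) of the shape of process Cloud Run expects: stateless, serving HTTP on the port named by the PORT environment variable, with nothing important written to the local filesystem. MySQL fits none of those constraints.

```python
# Minimal sketch of the contract Cloud Run expects: a stateless process that
# serves HTTP on the port given in the PORT environment variable (8080 by
# default). MySQL speaks its own wire protocol and keeps state on local disk,
# so it doesn't fit this model.
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Any state you need must live outside the container (e.g. Cloud SQL),
        # because the local filesystem is ephemeral and per-instance.
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"hello from a stateless Cloud Run service\n")

if __name__ == "__main__":
    port = int(os.environ.get("PORT", "8080"))
    HTTPServer(("0.0.0.0", port), Handler).serve_forever()
```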

Are service discovery tools responsible for providing service credentials as well?

I am trying to understand some basic concepts as I am still new to the concept of service discovery and cloud programming (excuse the cliché). A question that has been in my head for some time is: are service discovery solutions like Consul, etcd & Zookeeper responsible for providing service credentials as well?
For example, if we have a web application which queries information about the location of database server(s), who is responsible for providing it with the credentials (username, password) for connecting to it? I do know that this is probably subjective but I would be glad to learn more about best practices related to that.
Indeed, see Consul and Vault. Now, for the reasoning: service registries typically don't come with a full-fledged set of ACLs and the like to protect secrets; worse, they gossip those secrets around the network and dump them left and right on disk - it's a security nightmare. You want to make sure that access is as limited as possible, strictly on a need-to-know basis. Therefore, use a tool built specifically for that - Hardware Security Modules, Vault, Chef encrypted data bags, and so on.
Consul and Vault implement exactly the split you propose: Consul for service discovery, Vault for sharing secrets (including features such as leases and dynamic secrets).
Please read about these two tools to see if this concept works for you.
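As a rough illustration of that split, the sketch below asks Consul where a database service lives and Vault for the credentials to reach it. It assumes local agents on their default ports (8500 for Consul, 8200 for Vault), a token in the VAULT_TOKEN environment variable, and made-up names for the service ("mysql") and the KV v2 secret path.

```python
# Rough sketch: ask Consul *where* the database is, and Vault *how* to
# authenticate to it. Service name and secret path are illustrative.
import os
import requests

# Service discovery: healthy instances of the "mysql" service from Consul.
nodes = requests.get(
    "http://127.0.0.1:8500/v1/health/service/mysql",
    params={"passing": "true"},
).json()
svc = nodes[0]["Service"]
# Note: if Service.Address is empty, fall back to the node's address.
host, port = svc["Address"] or nodes[0]["Node"]["Address"], svc["Port"]

# Secrets: credentials from Vault's KV v2 engine (path is illustrative).
secret = requests.get(
    "http://127.0.0.1:8200/v1/secret/data/myapp/db",
    headers={"X-Vault-Token": os.environ["VAULT_TOKEN"]},
).json()["data"]["data"]

print(f"connect to {host}:{port} as {secret['username']}")
```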

Cloud based web service for Web Applications

My web application uses PHP/MySQL on the server side to fetch and store data in a database. The DB size will increase with the user base and can become huge. The application has been built and run on a conventional server, i.e. no "cloud"-specific code has been written. (I have no experience with cloud systems; is running services on them any different from running on a normal server?)
My concerns:
1. If I buy space on Amazon Elastic Compute Cloud, can I directly port all my code to the new server, or do I have to use some APIs specific to it? Since it's pay as you go, it seems highly suitable for such a requirement.
2. What are the other options for hosting a web service that would require large server space? How might apps like Whatsapp be doing the same?
Thanks.
1) The answer to the first question depends on the type of service you're buying. Cloud comes in many forms, from Infrastructure as a Service (which basically offers you hardware as a service on which you can run your own software stack, so an existing PHP/MySQL setup can usually be moved over without any cloud-specific APIs) to Software as a Service (e.g. Gmail, which lets you use applications or APIs hosted in the cloud).
In your case, I think the best alternative is Platform as a Service (e.g. Heroku), which defines a set of technologies supported by the provider and how to use them.
In either case, how difficult the move is depends on your app, the specification of the service, and the level of support offered, so you have to dig a little deeper (starting with guides on how to deploy a similar app would be a good choice); see the config sketch below for the part that usually matters most.
2) Startups and other medium-sized companies use cloud providers such as Amazon, Rackspace etc., and when they reach a certain size they tend to build their own data centers (e.g. Zynga). There's a threshold beyond which it's better to manage your own infrastructure instead of buying services from others.
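The config sketch mentioned in point 1: how painful a port is usually comes down to how the app reads its configuration. The app in question is PHP, but the pattern is language-agnostic, so here is a rough Python illustration of keeping connection details in environment variables so the same code runs unchanged on a conventional server, an EC2 instance, or a PaaS. The variable names and defaults are made up for the sketch.

```python
# Illustrative only: read database settings from the environment so nothing
# cloud-specific is baked into the code. Names and defaults are invented.
import os

DB_CONFIG = {
    "host": os.environ.get("DB_HOST", "localhost"),
    "port": int(os.environ.get("DB_PORT", "3306")),
    "user": os.environ.get("DB_USER", "app"),
    "password": os.environ.get("DB_PASSWORD", ""),
    "database": os.environ.get("DB_NAME", "app"),
}

def connection_string(cfg=DB_CONFIG):
    """Build a MySQL DSN from the environment-driven settings."""
    return (f"mysql://{cfg['user']}:{cfg['password']}"
            f"@{cfg['host']}:{cfg['port']}/{cfg['database']}")

if __name__ == "__main__":
    print(connection_string())
```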

Hosted Database v Cloud Database

I have looked everywhere...
What's the difference between a hosted database and a cloud database? They seem like the same thing.
Thanks
Both "hosted database" and "cloud database" mean that the database lives on the servers of some external provider/hoster.
The hoster might even be the same in both cases.
The main difference is that the "cloud" plans are usually meant to scale further (at a higher monthly fee), so you'd use them when you expect your site to grow quickly and need to adjust server capacity on short notice.
On the other hand, "hosted" plans are not that expensive, but have more limitations (server speed, database size...) and are more suited for "small" websites.
Of course this isn't by any means an "official" description of the two terms, but that's the impression that I get every time I see "cloud" or "hosted" webspaces/databases/services/whatever.
It depends on the context in which they're being used, but, yes, they usually mean the same thing. When I see the term cloud database being used, it's usually in reference to some cloud platform like Amazon EC2 or Microsoft Azure instead of GoDaddy or HostGator or something. Plus, cloud is the new buzzword - I'm sure it sells better. Lol.
As Christian Specht said, cloud servers really do scale further. So why would you need more scaling, and why are there so many feature options when choosing a cloud database service?
Things are not like they used to be. Before smartphones, users got information from the server only when they logged onto a specific web page with their credentials. Now apps like Facebook show notifications, serve ads, and collect/push other data in parallel while we're looking at something else entirely.
A hosted database is perfectly adequate when it only needs to be reached while users are on the web page. But the latest smartphone applications need to reach the database constantly, starting from the moment they are installed on the device. So every installation raises the baseline load on the server.
That's why more scalability is needed: more simultaneous connections and more I/O requests are expected every day. With dedicated servers built for that purpose, and configurable packages based on your expected user count and bandwidth usage, the cloud service is not just another marketing term but a genuinely helpful one.

How do you build and deploy a scalable web services infrastructure?

I have a client asking for this as a requirement, and I haven't done this before. What does he mean by web services infrastructure?
That phrase encompasses a wide variety of technical aspects. Your infrastructure is all of the components that make up the systems running a web business or application, including hardware. So it refers to your server and network setup, your bandwidth and connections in and out, your database setup, backup solutions, web server software, code deployment methods, and anything else used to run a web business with high reliability and uptime and a low rate of errors and bugs.
In order to make such a thing scalable, you have to architect all these components together into something that will work smoothly with growth over time. A scalable architecture should be flexible enough to handle sudden traffic spikes.
Methods used to facilitate scalability include replicated databases, clustered web servers, load balancers, RAID disk striping, and network switching. Your code has to take much of this into account.
It's a tough service to provide.
The first thing that comes to mind is an enterprise service bus.
He probably means some sort of "infrastructure" to run a lot of complex interacting web services.
That might be an enterprise application, called via a web service, that can run on many instances of a web application server; or a single instance that is nicely multithreaded and scales across many CPUs; or loads of different web services that all talk to each other, often via message queues, until you have something that breaks all the time and requires a huge team of people to maintain. You might as well throw in a load of virtual machines to get a virtualised, scalable, re-deployable web service infrastructure (i.e., lots of Tomcats or JBosses in Linux VMs, ready to deploy as needed, one app per VM).
Then there is physical scalability. Is there enough CPU power for your needs? Is there enough bandwidth between physical nodes to send all these messages and SOAP transactions between machines? Is there enough storage? Is the storage available on a fast, low-latency interconnect? Is the database well fed with CPU power and bandwidth, on a disk system that doesn't lag? Is there a database backup? And what about when a single machine can't handle the load of a particular function - then you need load balancers, which are also good for redundancy and for staying live during software updates.
Is there a site backup? Or are you scaling globally - will there be multiple data centres around the globe? Do you have redundant links to the internet from each data centre? What happens when a site goes down? How is data replicated between sites, to reduce inter-site communications, and how do these data caches and pushes work?
And so on and so forth. But your client probably just wants a web service that can be load balanced without thrashing (i.e., two or more instances can share data/sessions/etc., depending on the application), with easy database configuration and backup. Ease of deployment is desirable, so make the install simple. Or even provide a Linux VM for them to add to their VM infrastructure. Talk to their sysadmin to see what they currently do.
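As a rough illustration of the "instances can share sessions" point (not a prescription), the sketch below keeps session state in a shared Redis store instead of in-process memory, so any instance behind the load balancer can serve any request. The host name, key layout, and TTL are made up, and it assumes the redis Python package.

```python
# Rough sketch: shared session storage so any app instance behind the load
# balancer can serve any request. Host, key names, and TTL are illustrative.
import json
import uuid

import redis

store = redis.Redis(host="redis.internal", port=6379, decode_responses=True)
SESSION_TTL_SECONDS = 30 * 60  # 30 minutes

def create_session(user_id: str) -> str:
    """Create a session that any app instance can later read."""
    session_id = uuid.uuid4().hex
    store.setex(f"session:{session_id}", SESSION_TTL_SECONDS,
                json.dumps({"user_id": user_id}))
    return session_id

def load_session(session_id: str) -> dict | None:
    """Look up the session regardless of which instance created it."""
    raw = store.get(f"session:{session_id}")
    return json.loads(raw) if raw else None
```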
This phrase is often used as a marketing term from companies who sell some part of what they'll call a "scalable web services infrastructure".
Try to find out from the client exactly what they need. Do they have existing web services? Do they have existing business logic they've decided to expose as web services? Do they have customers who are asking to be able to access your client's systems through web services?
Does your client even know what a web service is?