Difference between fetch and latency in the metrics view - apiconnect-test-monitor

I've been using IBM's API Connect Test and Monitor and it has been really amazing. I just need to know what is actually meant by "fetch" vs. "latency" in the metrics view under the Dashboard tab, as seen in the image below.

Related

Autodesk forge viewer: Is there a way to increase the timeout of viewer to load multiple models/large models

I am using Forge Viewer version 7 to load models. Sometimes, when it takes more time to load them, either due to a slow network or due to large models, the viewer appears to time out and says 'Disconnected'. Is there a way to increase this timeout?
I tried to find the viewer timeouts in the documentation but couldn't.
The error I am getting is "Disconnected!"
I'm afraid the connection is timed out at the networking level (the browser/server shutting the connection down), so the Viewer or JavaScript has very limited control over that.
However, you can set up a proxy service (see an example here; a rough sketch also follows below) in your backend to relay the requests to and from the Forge endpoints. This way you'd have control over the timeout settings in your backend and the opportunity to optimize network traffic (backend server locality, SD-WAN, etc. come to mind) or even cache the request contents to boost performance.
And since you said "sometimes", I suppose the model can be loaded properly when networking conditions are better. If that's the case, you can also consider caching the requests in the browser; see here for details.
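The linked example aside, here is a minimal sketch of such a relay in Python using Flask and requests. The Forge host, route, and token handling are assumptions on my part, and a production relay would also need streaming responses and fuller header forwarding:

```python
# Sketch: relay Forge requests through your own backend so you control
# timeouts and can add caching. Host/route names are illustrative only.
import requests
from flask import Flask, Response, request

app = Flask(__name__)
FORGE_BASE = "https://developer.api.autodesk.com"  # assumed upstream host

@app.route("/forge-proxy/<path:subpath>")
def relay(subpath):
    upstream = f"{FORGE_BASE}/{subpath}"
    # A generous timeout under our control, unlike the browser's defaults.
    resp = requests.get(
        upstream,
        headers={"Authorization": request.headers.get("Authorization", "")},
        timeout=300,
    )
    return Response(resp.content, status=resp.status_code,
                    content_type=resp.headers.get("Content-Type"))

if __name__ == "__main__":
    app.run(port=8080)
```

You would then point the Viewer's endpoint at `/forge-proxy/` instead of the Forge host, so every model request passes through your backend.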

Will GoogleBot's indexing cause CloudSQL to be expensive on a low traffic website (Afraid of Google's CloudSQL pricing)

Here is the issue:
I used the Cloud SQL price calculator to estimate the price of running a website. My website has 1000-2000 URLs, and each URL uses the DB in some way. I don't have more than 1 GB of data, and I mostly deal with reads against a small 50k-record table; nothing super complicated. I don't have very complex queries either, and I write to the DB only about once a week, a couple of records here and there. I've even considered SQLite, to be honest.
I don't currently have a lot of traffic, maybe people visit once a day; however, GoogleBot will continuously try to index the website via the sitemap, which sometimes causes lots of requests to the server.
Currently I have a normal PHP+MySQL website that does the job on a DigitalOcean instance and doesn't take a lot of resources. However, I want to move to Cloud Run in order to try the Cloud Run technology, and running MySQL directly on the instance is discouraged (as per this question: Should I run mysql on google cloud run? (or any database)).
So I'm kind of afraid of using Cloud SQL and then having GoogleBot destroy my credit card by making lots of concurrent requests to the Cloud SQL database during daily indexing.
Traffic doesn't scare me (I don't have any), but crawlers do.
Should I use Cloud SQL for this use case?
Will my credit card be destroyed?
Are these valid concerns?
Any opinion from experienced Cloud SQL users would be greatly appreciated.
If you are considering a fully managed database instance, Google Cloud SQL is definitely a good choice for you.
If you want to optimize GoogleBot crawling, you can do it from here.
However, if you experience high server load from specific sites/services, you may consider blocking them or using Google Cloud CDN caching (see also the app-level caching sketch after this answer).
Please read this article; it explains how to deal with heavy bot load on a website.
Your concerns do not sound valid to me, since you can limit GoogleBot's crawl rate.
Since Cloud Run is a stateless container compute platform, it is not suited to running MySQL. If you want to install and manage your own MySQL server, you can do it on Compute Engine using a one-click solution from the Marketplace.
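One further option beyond the CDN caching mentioned above (this is my own addition, not from the linked article): a small in-process TTL cache in front of read-heavy queries, so repeated crawler hits don't each reach Cloud SQL. A minimal sketch in Python, with the query and connection details as placeholders:

```python
# Sketch: tiny TTL cache so repeated crawler hits reuse a recent result
# instead of querying Cloud SQL each time. Keys and the query callable
# are placeholders, not a real schema.
import time

_cache = {}  # key -> (expires_at, value)
TTL_SECONDS = 300  # serve results up to 5 minutes old

def cached_query(key, run_query):
    now = time.time()
    hit = _cache.get(key)
    if hit and hit[0] > now:
        return hit[1]  # still fresh: no database round trip
    value = run_query()  # e.g. a read against Cloud SQL
    _cache[key] = (now + TTL_SECONDS, value)
    return value

# Usage (hypothetical):
# cached_query("page:/about", lambda: db.fetch_page("/about"))
```

For mostly static pages indexed by a crawler, even a short TTL like this collapses bursts of bot requests into one query per page per interval.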

Business Intelligence: Live reports from MySQL

I want to create a (nearly) live dashboard from MySQL databases. I tried Power BI, SSRS, and other similar tools, but they were not as fast as I wanted. What I have in mind is for the data to be updated every minute or even more often. Is that possible? And are there any free (or inexpensive) tools for this?
Edit: I want to build a wallboard to show some data on a big TV screen, and I need it to be real-time. I tried SSRS auto-refresh as well, but it shows a loading sign and is very slow; plus Power BI uses Azure, which is very complex to configure and is blocked in my country.
This topic has many more layers than just asking which tool is best for the job.
You have to consider the
Velocity
Veracity
Variety
Kind
Use case
of the data. Sure, these are usually only recited when talking about Big Data, but they will give you a feeling for the size and complexity of the data.
Loading
Is the data already being loaded so that you "just" use it? Or do you also need to load it in real time or near-real time (for clarification, read this answer here)?
Polling/Pushing
Do you want to poll data every x seconds or minutes? Or do you want to work event-based? What requirements make you need to show data this fast? (A minimal polling sketch follows the conclusion below.)
Use case
Do you want to show financial data? Do you need to show data about errors and system logs from servers and applications? Do you want to generate insights as soon as a visitor to a webpage makes a request?
Conclusion
When thinking about those questions, keep in mind that this should just be a hint to go in one direction or another. Depending on the data and the use case, you might use an ELK stack (for logs), Power BI (for financial data), or even some scripts (for billing).
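To make the polling option above concrete, here is a minimal sketch in Python that re-queries MySQL every minute. The table name and credentials are hypothetical, and it assumes the mysql-connector-python package:

```python
# Sketch: poll MySQL once a minute and print fresh numbers for a wallboard.
# The `sales_summary` table and the credentials are placeholders.
import time
import mysql.connector

def fetch_snapshot():
    conn = mysql.connector.connect(
        host="localhost", user="dash", password="secret", database="reports"
    )
    cur = conn.cursor()
    cur.execute("SELECT region, SUM(amount) FROM sales_summary GROUP BY region")
    rows = cur.fetchall()
    cur.close()
    conn.close()
    return rows

while True:
    for region, total in fetch_snapshot():
        print(region, total)
    time.sleep(60)  # poll interval; event/push setups avoid this fixed delay
```

A real wallboard would render this into a web page or screen instead of printing, but the loop illustrates the trade-off: polling is simple, while the one-minute sleep is exactly the latency an event-based (push) design eliminates.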

Can I download historical CPU usage data for Google Compute Engine?

Does anyone know if there is a way to download historical CPU usage data via an API call for Google Compute Engine?
On the Console overview page, graphs of this data are provided for at least a month back, but I don't see anything obvious about how to download the actual data directly.
The Google Compute Engine usage export feature recently launched, and it sounds like what you're looking for. It gives daily detail in a CSV, as well as a month-to-date summary.
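Separately from the usage export, the Cloud Monitoring API can return the underlying CPU time series programmatically; this is a different mechanism than the CSV export above. A sketch using the google-cloud-monitoring Python client, with the project ID as a placeholder:

```python
# Sketch: pull historical CPU utilization for Compute Engine instances
# via the Cloud Monitoring API (google-cloud-monitoring package).
import time
from google.cloud import monitoring_v3

PROJECT_ID = "my-project"  # placeholder
client = monitoring_v3.MetricServiceClient()
project_name = f"projects/{PROJECT_ID}"

now = time.time()
interval = monitoring_v3.TimeInterval(
    {
        "end_time": {"seconds": int(now)},
        "start_time": {"seconds": int(now - 30 * 24 * 3600)},  # last 30 days
    }
)

results = client.list_time_series(
    request={
        "name": project_name,
        "filter": 'metric.type = "compute.googleapis.com/instance/cpu/utilization"',
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)

# Each series corresponds to one instance; each point is a sampled value.
for series in results:
    instance = series.resource.labels.get("instance_id", "unknown")
    for point in series.points:
        print(instance, point.interval.end_time, point.value.double_value)
```

Note that Monitoring only retains metric data for a limited window, so for long-term archives you would still want to export the points somewhere durable.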

Measuring scalability of the web app hosted on cloud

I have developed a social networking site using the Elgg framework, and I am hosting it on Amazon's cloud (Amazon EC2, the free-tier micro instance) in order to develop a benchmark for it. I am creating around 200 columns describing each user (most of them dummy), and after that I should create around a million users, with each user's profile updated with some data. This is done to reflect a big-data scenario. Once hosted on the cloud, I should measure the cloud's performance based on a query and an update action across all users. The problem is how to create so many users. Which tool would be optimal to choose? After that, I should also consider storage on a file system (HDFS) and do the same with some modifications (the output should be a row and the input should be unstructured data).
For the Elgg framework we are using MySQL as the backend. I have no idea how to start with this. Any suggestions would be really helpful.
Thank you.
I had to perform a similar task recently and came up with this script; maybe it can help you.
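The linked script isn't reproduced here, but for illustration, a batched bulk-insert approach in Python might look like the sketch below. The `users` table and its columns are hypothetical placeholders (Elgg actually spreads user data across its own entity tables), and it assumes mysql-connector-python:

```python
# Sketch: generate a million dummy users in batches with executemany().
# Table and columns are placeholders, not Elgg's real schema.
import mysql.connector

conn = mysql.connector.connect(
    host="localhost", user="bench", password="secret", database="benchdb"
)
cur = conn.cursor()

BATCH = 1000
TOTAL = 1_000_000
sql = "INSERT INTO users (username, email, bio) VALUES (%s, %s, %s)"

for start in range(0, TOTAL, BATCH):
    rows = [
        (f"user{i}", f"user{i}@example.com", f"dummy bio for user {i}")
        for i in range(start, start + BATCH)
    ]
    cur.executemany(sql, rows)
    conn.commit()  # commit per batch to keep transactions small

cur.close()
conn.close()
```

Batching like this keeps memory use flat and avoids one giant transaction; on a micro instance you may also want to pause between batches so the benchmark setup itself doesn't saturate the box.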