I need to fetch all the compute metrics (CPUUtilization, MemoryUtilization, ...) in one REST API call. Currently, per the documentation, we can only fetch the values for one metric at a time. Is there a way to achieve this?
Note: I need data for the last 10 minutes.
Each SummarizeMetricsData API call can only get data about one metric. You would need to make multiple calls to get data about all the different metrics associated with an instance.
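Since there is no single-call option, the usual workaround is to loop over the metric names. A minimal sketch of that loop, assuming the OCI Python SDK and a default ~/.oci/config profile (the metric names, namespace, and compartment ID are illustrative):

```python
# One SummarizeMetricsData call per metric; assumes the OCI Python SDK.
from datetime import datetime, timedelta, timezone

import oci

config = oci.config.from_file()  # default ~/.oci/config profile
monitoring = oci.monitoring.MonitoringClient(config)

end = datetime.now(timezone.utc)
start = end - timedelta(minutes=10)  # the last 10 minutes from the question

for metric in ("CpuUtilization", "MemoryUtilization"):  # add the metrics you need
    details = oci.monitoring.models.SummarizeMetricsDataDetails(
        namespace="oci_computeagent",   # compute-agent metric namespace
        query=f"{metric}[1m].mean()",   # MQL: 1-minute mean datapoints
        start_time=start,
        end_time=end,
    )
    response = monitoring.summarize_metrics_data(
        compartment_id=config["tenancy"],  # or the instance's compartment OCID
        summarize_metrics_data_details=details,
    )
    for series in response.data:
        print(metric, series.dimensions, len(series.aggregated_datapoints))
```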
I am using Google Cloud Datastore to store some data. When trying to extract all entities of my "Kind" through the API into GAS code, I realized that the API returns 300 entities per request. To extract all of the entities, I used the "cursor" option to fetch each next batch where the previous one stopped.
Is there any way to extract all entities (or at least more than 300) at once?
I searched the web but could not find a specific answer.
The maximum number of entities you can update/upsert in one go via Datastore's Data API is 500, but a lookup operation can fetch up to 1,000 entities (as long as they are collectively under 10 MB for the transaction), as listed under the "Limits" section of Datastore's reference documentation.
However, you might be able to leverage the export/import endpoints of Datastore's Admin API to move data in bulk. Check out the guide for more information.
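Since the batch size itself cannot be raised past those limits, cursor paging remains the standard way to read everything. A minimal sketch of that loop, shown here with the google-cloud-datastore Python client rather than Apps Script (the kind name and page size are illustrative):

```python
# Page through every entity of a kind using query cursors.
from google.cloud import datastore

client = datastore.Client()

def fetch_all(kind, page_size=300):
    cursor = None
    while True:
        query = client.query(kind=kind)
        iterator = query.fetch(limit=page_size, start_cursor=cursor)
        page = next(iterator.pages)        # consume one page of results
        yield from page
        cursor = iterator.next_page_token  # None once results are exhausted
        if cursor is None:
            break

entities = list(fetch_all("Kind"))
print(len(entities))
```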
I have a .csv file with 400 million lines of data. I was wondering, if I were to convert it into a data API that returns JSON, would there be any limitations when consumers call GET on it? Would it show the full content of the data, and would it take long for the API to produce an output when called?
If you expose this as a GET API call, you might run into the following issues:
You might hit the maximum size of data you can transfer over a GET request; the exact cap depends on your server and the client's device (you can refer to this answer for details).
Latency will depend on the physical location of your server and the clients; you can reduce it by caching your responses if the data does not change frequently.
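A common way to sidestep both issues is to never return the whole file in one response: page the data and let clients cache stable pages. A minimal sketch, assuming Flask (the endpoint, file path, and page-size cap are illustrative; a real service would load the CSV into an indexed store rather than re-scan it per request):

```python
# Paged, cacheable GET endpoint over a CSV file (illustrative only).
import csv
from flask import Flask, jsonify, request

app = Flask(__name__)
CSV_PATH = "data.csv"  # hypothetical path

@app.route("/rows")
def rows():
    offset = int(request.args.get("offset", 0))
    limit = min(int(request.args.get("limit", 1000)), 10_000)  # cap page size
    out = []
    with open(CSV_PATH, newline="") as f:
        for i, row in enumerate(csv.DictReader(f)):
            if i >= offset + limit:
                break
            if i >= offset:
                out.append(row)
    resp = jsonify(out)
    resp.headers["Cache-Control"] = "public, max-age=3600"  # let clients cache
    return resp
```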
Hope that helps
I'll describe the application I'm trying to build and the technology stack I have in mind, to get your opinion.
Users should be able to work on a list of tasks. These tasks come from an API with all their information: id, image URLs, description, etc. The API is only available in one datacenter, so to avoid that delay (for example in China) the tasks are stored in a queue.
So there will be different queues depending on the country, and once you finish a task it is sent to another queue, which later writes that information back to the original datacenter.
The list of tasks is quite large; that's why there is an API call to fetch the tasks (~10k rows) and store them in a queue, and users work on them from the queue for their country.
For this system, which can have around 100 queues, I was thinking of Redis to manage the task-list requests (e.g., get 5k rows from the China queue, write 500 rows to the write queue, etc.).
The API responses come as a list of JSON objects. These 10k rows, for example, need to be stored somewhere. Since you need to be able to filter within this queue, MySQL isn't an option unless I store every field of the JSON object as a new column. My first thought was a NoSQL DB, but I wasn't too happy with MongoDB in the past, and an API response doesn't change much anyway. Since I also need relational tables for other things, I was thinking of PostgreSQL: it's a relational database, and it can store JSON and filter by it.
What do you think? Ask me if something isn't clear.
You can use the hstore extension (or the native json/jsonb types) in PostgreSQL to store this kind of data, or dynamic columns in MariaDB (a MySQL fork).
If you can move your persistence stack to Java, then many interesting options become available: MapDB (though it requires memory and its API is changing rapidly), Persistit, or MVStore (the engine behind H2).
All of these would let you store JSON with decent performance. I suggest you use a full-text search engine like Lucene to avoid searching JSON content in a slow way.
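For the PostgreSQL route, a minimal sketch of keeping each task's raw JSON in a jsonb column and filtering on a field inside it, assuming psycopg2 (table, column, and queue names are illustrative):

```python
# Store raw API objects in a jsonb column and filter on a JSON field.
import psycopg2
from psycopg2.extras import Json

conn = psycopg2.connect("dbname=tasks")  # hypothetical connection string
with conn, conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS task (
            id      bigserial PRIMARY KEY,
            queue   text  NOT NULL,   -- e.g. 'china'
            payload jsonb NOT NULL    -- the raw object from the API response
        )
    """)
    cur.execute(
        "INSERT INTO task (queue, payload) VALUES (%s, %s)",
        ("china", Json({"id": 1, "description": "label image"})),
    )
    # ->> extracts a jsonb field as text, so you can filter on it in SQL
    cur.execute(
        "SELECT payload FROM task WHERE queue = %s AND payload->>'description' LIKE %s",
        ("china", "%image%"),
    )
    print(cur.fetchall())
```

For frequent filters at this scale, PostgreSQL also supports GIN indexes on jsonb columns to keep those lookups fast.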
I am offering a RESTful API to clients that access it as a web service. Now I would like to be able to count the API calls per month.
What would be the best way to count the calls? Incrementing a DB field would mean one more DB call per request. Is there a workaround? We are talking about millions of API calls per month.
You can also log to a text file and use log-analysis tools such as Webalizer (http://www.webalizer.org/) to analyze the files.
HTH
You can use a separate in-memory database to track these counts, and write them to disk occasionally. Or buffer calls in a collection and batch-write them to the database periodically.
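A minimal sketch of that idea, assuming Redis as the in-memory store (the key scheme and flush strategy are illustrative):

```python
# Count calls with an atomic in-memory INCR; flush to the main DB later.
import time
import redis

r = redis.Redis()

def record_call(client_id: str) -> None:
    month = time.strftime("%Y-%m")
    r.incr(f"apicalls:{client_id}:{month}")  # O(1), no DB call on the hot path

def flush_to_db() -> None:
    # Run from a cron job or background thread, not per request.
    for key in r.scan_iter("apicalls:*"):
        count = int(r.get(key) or 0)
        # ... persist (key, count) with a single UPSERT in your database ...
        print(key.decode(), count)
```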
I'm trying to migrate a lot of buckets from one production server to another. I'm currently using a script that queries a view and copies the results to the other server. However, I don't know how this process can be broken down into smaller steps. Specifically, I'm looking to copy all available buckets to the other server (this takes several hours), run some tests, and, when the tests are successful, use the same script to migrate only any new buckets.
Does Couchbase support, for its views, any feature that might help? Like LIMIT and OFFSET for the query, or maybe a last-modified date on each bucket so I can filter by that?
You should really consider using Couchbase's Backup and Restore tools.
To answer your question: yes. If you are using an SDK, you need to look into its API; but, for instance, the console lets you check all the filter options available to you. As an example, if you use HTTP you have &limit=10&skip=0 as arguments. Check more info here
To filter by modified date you need to create a view specifically for that, one that emits the modified date as the key so that it is searchable.
Here is a link that shows how to search by date, which, as I mentioned, implies creating a map/reduce function with the date as the key and then querying on that key: Date and Time Selection
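Putting the two together, a minimal sketch of paging a view in batches over Couchbase's view HTTP interface (port 8092), assuming Python's requests library; the host, credentials, bucket, design document, and view names are placeholders:

```python
# Page through a Couchbase view with limit/skip so a big copy job can
# run in smaller batches (all names below are placeholders).
import requests

VIEW_URL = "http://couchbase-host:8092/mybucket/_design/mydesign/_view/myview"
BATCH = 1000

skip = 0
while True:
    resp = requests.get(
        VIEW_URL,
        params={"limit": BATCH, "skip": skip},
        auth=("user", "password"),
    )
    rows = resp.json().get("rows", [])
    if not rows:
        break
    for row in rows:
        pass  # copy row["id"] / row["value"] to the target server here
    skip += len(rows)
```

For a view keyed by modified date, you could pass startkey/endkey instead of skip to select only documents changed since the last migration.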