In the Zabbix project, how to export the past four hours of historical data about CPU and memory from MariaDB?

I can see the real-time records of CPU and memory in the Zabbix web interface, and the history table in the MariaDB database has data. How should I export the historical data of the past four hours?

Historical data lives in the history tables. To get CPU or memory history, you first need the item's itemid: look up the host's hostid in the hosts table, find its monitoring items by hostid in the items table, pick out the data item you want, and then query the history table with that itemid. For example:
select itemid, from_unixtime(clock) as time, value
from history
where itemid = 29096
  and clock >= unix_timestamp('2021/05/21 00:00:00')
  and clock <= unix_timestamp('2021/05/21 17:31:00');
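
For reference, here is a minimal JDBC sketch of the same lookup chain run end to end from Java; redirecting its output to a file gives a CSV export of the past four hours. The host name 'Zabbix server' and the item key 'system.cpu.util' are assumptions to replace with your own, and it needs the MariaDB Connector/J driver on the classpath.

import java.sql.*;

public class ZabbixHistoryExport {
    public static void main(String[] args) throws SQLException {
        // Connection details are assumptions; adjust for your environment.
        String url = "jdbc:mariadb://localhost:3306/zabbix";
        try (Connection conn = DriverManager.getConnection(url, "zabbix", "password")) {
            // 1. Find the itemid: join hosts -> items on hostid.
            long itemId;
            String findItem = "SELECT i.itemid FROM items i "
                    + "JOIN hosts h ON h.hostid = i.hostid "
                    + "WHERE h.host = ? AND i.key_ = ?";
            try (PreparedStatement ps = conn.prepareStatement(findItem)) {
                ps.setString(1, "Zabbix server");   // assumed host name
                ps.setString(2, "system.cpu.util"); // assumed item key
                try (ResultSet rs = ps.executeQuery()) {
                    if (!rs.next()) throw new IllegalStateException("item not found");
                    itemId = rs.getLong(1);
                }
            }
            // 2. Export the past four hours from the history table.
            //    Note: integer-valued items (e.g. memory in bytes) are stored
            //    in history_uint rather than history.
            String export = "SELECT itemid, FROM_UNIXTIME(clock) AS time, value "
                    + "FROM history WHERE itemid = ? "
                    + "AND clock >= UNIX_TIMESTAMP(NOW() - INTERVAL 4 HOUR)";
            try (PreparedStatement ps = conn.prepareStatement(export)) {
                ps.setLong(1, itemId);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getLong("itemid") + ","
                                + rs.getTimestamp("time") + ","
                                + rs.getDouble("value"));
                    }
                }
            }
        }
    }
}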

Related

Tell MySQL to store a table in memory or on disk

I have a rather large table (10M+ rows) with quite a lot of data coming from IoT devices.
People only access the last 7 days of data, but have on-demand access to older data.
As this table is growing at a very fast pace (100k rows/day), I chose to split it into two tables: one holding only the last 7 days of data, and another with the older data.
I have a cron job running that basically takes the oldest data and moves it to the archive table (sketched below).
How could I tell MySQL to keep only the '7 days' table loaded in memory to speed up read access, and keep the 'archives' table on disk (SSD) for less frequent access?
Is this something implemented in MySQL (Aurora)?
I couldn't find anything in the docs besides MEMORY tables, but those are not what I'm after.
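
For what it's worth, the cron job described above usually boils down to a copy-then-delete inside one transaction. A minimal JDBC sketch, where the table names data_7days and data_archive and the timestamp column ts are assumptions:

import java.sql.*;

public class ArchiveOldRows {
    public static void main(String[] args) throws SQLException {
        // Connection details and table/column names are assumptions.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/iot", "user", "password")) {
            conn.setAutoCommit(false); // copy and delete atomically
            try (Statement st = conn.createStatement()) {
                // Copy rows older than 7 days into the archive table...
                st.executeUpdate("INSERT INTO data_archive "
                        + "SELECT * FROM data_7days "
                        + "WHERE ts < NOW() - INTERVAL 7 DAY");
                // ...then remove them from the hot table.
                st.executeUpdate("DELETE FROM data_7days "
                        + "WHERE ts < NOW() - INTERVAL 7 DAY");
                conn.commit();
            } catch (SQLException e) {
                conn.rollback();
                throw e;
            }
        }
    }
}

As far as I know, stock MySQL/InnoDB has no per-table memory pinning; in practice, if the '7 days' table is small enough for its pages to fit in the InnoDB buffer pool, it ends up effectively memory-resident on its own, while rarely touched archive pages get evicted to disk.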

Spark memory requirements for querying 20 GB

Before diving into the actual coding, I am trying to understand the logistics around Spark.
I have server logs split into 10 CSVs of around 2 GB each.
I am looking for a way to extract some data, e.g. how many failures occurred in a period of 30 minutes per server.
(The logs contain entries from multiple servers, i.e. there is no predefined ordering, either in time or per server.)
Is that something I could do with Spark?
If yes, would that mean I need a box with 20+ GB of RAM?
When I operate on RDDs in Spark, does an operation take the full dataset into account? E.g. would ordering by timestamp and server id execute against the full 20 GB dataset?
Thanks!
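
For scale: with the Spark DataFrame API, the aggregation described above looks roughly like the sketch below, and Spark processes the input partition by partition (spilling to disk during shuffles), so the box does not need to hold the full 20 GB in RAM. The column names timestamp, server_id and status are assumptions about the log schema.

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import static org.apache.spark.sql.functions.*;

public class FailureCounts {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("failure-counts")
                .master("local[*]") // a single box; Spark spills to disk as needed
                .getOrCreate();

        // Read all 10 CSVs at once; the schema is an assumption.
        Dataset<Row> logs = spark.read()
                .option("header", "true")
                .option("inferSchema", "true")
                .csv("/data/server-logs/*.csv");

        // Failures per server per 30-minute window. No pre-sorting of the
        // input is required; the groupBy shuffles by key itself.
        Dataset<Row> failures = logs
                .filter(col("status").equalTo("FAILURE"))
                .groupBy(col("server_id"),
                         window(col("timestamp").cast("timestamp"), "30 minutes"))
                .count();

        failures.show();
        spark.stop();
    }
}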

Does the Amazon RDS charge increase with database volume?

Suppose I have a database in Amazon RDS with 100 rows, and I run a query to fetch a single row out of those 100. After a few days the 100 rows have grown to 100k rows, and I run the same query again to fetch a single row. Here is my question: will the query cost be the same in both cases, or will the second one be higher?
RDS pricing is based on the hourly cost of the RDS instance class, plus the cost of storage and backups; for some database types (Aurora) there is also an I/O rate cost. Other than the I/O rate, the pricing is pretty straightforward and should answer your question.

Reducing the processing time of a database

We have two databases, DB-A and DB-B. DB-A has more than 5000 tables, and we process the whole database daily. Processing here means we take data from multiple tables of DB-A and insert it into some of the tables of DB-B. After inserting, we access the DB-B data many times, because we need to process all of DB-B's data: we read it whenever we need to process it, more than 500 times a day, and each time we access only the data needed for that step. Since we are accessing DB-B so many times, the processing takes more than 2 hours.
The problem is that I want to read the data from DB-A, process it, and insert it into DB-B in one shot. The constraint is limited resources: we have only 16 GB of RAM and are not in a position to add more.
We have done indexing and so on, but it is still taking more than 2 hours. Please suggest how I can reduce the processing time of this data.
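
If DB-A and DB-B are schemas on the same MySQL server (the question does not say), the "one shot" transfer can be a single server-side INSERT ... SELECT, which never pulls the rows through the 16 GB application host. A sketch with hypothetical schema, table, and column names:

import java.sql.*;

public class OneShotTransfer {
    public static void main(String[] args) throws SQLException {
        // Assumes db_a and db_b live on the same server; if they are separate
        // servers, this has to become a read plus a batched write instead.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/", "user", "password");
             Statement st = conn.createStatement()) {
            // One server-side statement moves the data in one shot.
            st.executeUpdate("INSERT INTO db_b.target (id, payload) "
                    + "SELECT a.id, a.payload FROM db_a.source a");
        }
    }
}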

Hibernate tuning for high rate of inserts and selects per second

We have a data acquisition application with two primary modules interfacing with DB (via Hibernate) - one for writing the collected data into DB and one for reading/presenting the collected data from DB.
Average rate of inserts is 150-200 per second, average rate of selects is 50-80 per second.
Performance requirements for both writing/reading scenarios can be defined like this:
Writing into DB - no specific timings or performance requirements here, DB should be operating normally with 150-200 inserts per second
Reading from DB - newly collected data should be available to the user within 3-5 seconds timeframe after getting into DB
Please advise on the best approach for tuning the caching/buffering/operating policies of Hibernate to optimally support this scenario.
BTW, MySQL with the InnoDB engine is being used underneath Hibernate.
Thanks.
P.S.: By "150-200 inserts per second" I mean the average rate of incoming data packets, not the actual number of records being inserted into the DB. But in any case, we should target a very high rate of inserts per second.
I would read this chapter of the Hibernate docs first.
And then consider the following
Inserting
Batch the inserts and do a few hundred per transaction. You say you can tolerate a delay of 3-5 seconds, so this should be fine.
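As a sketch of what that looks like (the Measurement entity is a hypothetical stand-in for the collected data, and hibernate.jdbc.batch_size should be set to a matching value, e.g. 200, in the Hibernate configuration):

import java.util.List;
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;

public class BatchWriter {
    private final SessionFactory sessionFactory;

    public BatchWriter(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    public void saveBatch(List<Measurement> measurements) {
        Session session = sessionFactory.openSession();
        Transaction tx = session.beginTransaction();
        try {
            int batchSize = 200; // keep in sync with hibernate.jdbc.batch_size
            for (int i = 0; i < measurements.size(); i++) {
                session.save(measurements.get(i));
                if (i > 0 && i % batchSize == 0) {
                    session.flush();  // push the JDBC batch to MySQL
                    session.clear();  // free the first-level cache
                }
            }
            tx.commit();
        } catch (RuntimeException e) {
            tx.rollback();
            throw e;
        } finally {
            session.close();
        }
    }
}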
Selecting
Querying may already be OK at 50-80/second, provided the queries are very simple
Index your data appropriately for common access patterns
You could try a second-level cache in Hibernate; see this chapter. I have not done this myself, so I can't comment further.
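
For completeness, enabling it is roughly the following, assuming Ehcache as the provider (with hibernate.cache.use_second_level_cache=true and an Ehcache region factory set in the configuration). Bear in mind a second-level cache can serve slightly stale reads, which still seems compatible with the stated 3-5 second freshness window.

import javax.persistence.Entity;
import javax.persistence.Id;

import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

// Hypothetical entity standing in for the collected data; the annotation
// makes it eligible for the second-level cache shared across sessions.
@Entity
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
public class Measurement {

    @Id
    private Long id;

    private double value;

    // getters and setters omitted
}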