Google Cloud persistent disk pricing not prorated like Amazon EBS?

Is GCP persistent disk pricing prorated for short-term use, such as retrieving data from a backup snapshot?
Pricing for GCP balanced storage is $0.10 per GB per month.
If I create a 100 GB disk for 1 day only in a 30-day month, will I be billed:
(1) 0.1 * 100 = $10?
Or
(2) 0.1 * 100 / 30 days = $0.33?
With Amazon EBS it seems to be (2), according to this page. But according to the GCE pricing page, although it is not clear, Google seems more like (1).

Yes, the disk pricing is always prorated. In any case, you can use the pricing calculator to get a good estimate for any product's pricing.
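A minimal sketch of the proration arithmetic from the question, assuming simple per-day proration of the $0.10 per GB-month rate (the actual billing granularity may be finer):

```python
# Proration arithmetic from the question.
# Assumes simple per-day proration of the monthly rate; actual GCP
# billing granularity may be finer than one day.

PRICE_PER_GB_MONTH = 0.10   # balanced PD price used in the question
DAYS_IN_MONTH = 30

def prorated_disk_cost(size_gb: float, days_used: float) -> float:
    """Cost of keeping a disk of `size_gb` GB for `days_used` days."""
    monthly_cost = size_gb * PRICE_PER_GB_MONTH
    return monthly_cost * (days_used / DAYS_IN_MONTH)

print(prorated_disk_cost(100, 30))  # full month -> 10.0  (case 1)
print(prorated_disk_cost(100, 1))   # one day    -> ~0.33 (case 2)
```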

Ethereum Genesis file private network

I need some basic info on Ethereum.
Can I send 1 million transactions in a day on an Ethereum private network? If yes, how much gas will be required (approximately)?
What is the maximum gas limit we can define for a node?
And if I reinitialize the genesis file, is a new blockchain started, or does it continue with the older one?
Can I send 1 million transactions in a day on an Ethereum private network?
Yes, that's around 12 transactions per second, which is no problem:
1000000 / (24 * 60 * 60) = 11.574
If yes, how much gas will be required (approximately)?
A transaction without anything else but value transfer costs 21,000 gas.
That is 21 billion gas for 1 million transactions per day, or (assuming a 15-second block time) roughly 3.6 million gas per block:
21000000000 / (24 * 60 * 4) = 3645833.333
The default gas limit on the Ethereum public network is 4712388 (1.5 * pi million), but it's trivial to increase the target gas limit.
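A minimal sketch reproducing the back-of-the-envelope numbers above (the 21,000 gas per plain value transfer and the 15-second block time are the figures used in this answer):

```python
# Back-of-the-envelope throughput and gas figures from the answer.

TX_PER_DAY = 1_000_000
SECONDS_PER_DAY = 24 * 60 * 60
GAS_PER_TRANSFER = 21_000      # plain value transfer, no contract code
BLOCK_TIME_SECONDS = 15        # assumed block time from the answer

tx_per_second = TX_PER_DAY / SECONDS_PER_DAY           # ~11.57
blocks_per_day = SECONDS_PER_DAY / BLOCK_TIME_SECONDS  # 5760
gas_per_day = TX_PER_DAY * GAS_PER_TRANSFER            # 21,000,000,000
gas_per_block = gas_per_day / blocks_per_day           # ~3,645,833

print(tx_per_second, gas_per_block)
```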
What is the maximum gas limit we can define for a node?
In theory, you should be able to set the gas limit as high as you wish; however, that's not practical, as discussed in EIP-106, which suggests limiting the maximum block gas limit to 2^63 - 1.
And if I reinitialize the genesis file, is a new blockchain started, or does it continue with the older one?
Yes, if you change the genesis, this will in most cases start a new blockchain.

Move from MySQL to AWS DynamoDB? 5m rows, 4 tables, 1k writes p/s

Considering moving my MySQL architecture to AWS DynamoDB. My application has a requirement of 1,000 read/write requests per second. How does this play with PHP updates? Having 1,000 workers process DynamoDB reads/writes seems like it will take a higher toll on CPU/memory than MySQL reads/writes.
I have thought about using a log file to store the updated information, then creating scripts to process the log files to remove DB load; however, I am stuck on file locking, and would be curious if anyone has ideas on implementing this: 300 separate scripts would be writing to a single log file. The log file could then be processed every minute into the DB. Not sure how this could be implemented without locking. The server-side scripts are written in PHP.
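One way to sidestep a single shared log file is to buffer rows per worker and flush them through DynamoDB's batch write API. A minimal sketch (in Python/boto3 rather than PHP; the table name "table_a" and key "id" are hypothetical):

```python
# Sketch: buffer updates in memory per worker and flush them to DynamoDB
# in batches, instead of many workers appending to one shared log file.
# Table name "table_a" and key "id" are hypothetical.
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("table_a")

def flush(rows):
    # batch_writer() groups items into BatchWriteItem calls (25 items max
    # per request) and retries unprocessed items automatically.
    with table.batch_writer() as batch:
        for row in rows:
            batch.put_item(Item=row)

buffer = []
updates = [{"id": "1", "value": 42}, {"id": "2", "value": 7}]  # stand-in for the real update stream
for update in updates:
    buffer.append(update)
    if len(buffer) >= 25:
        flush(buffer)
        buffer.clear()
if buffer:
    flush(buffer)
```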
Current Setup
MYSQL Database (RDS on AWS)
Table A has 5m records - the main DB table, 30 columns, mostly numerical plus text <500 chars (growth: +30k records per day). It has relationships with 4 other tables:
Table B - 15m records (growth: +90k records per day).
Table C - 2m records (growth: +10k records per day).
Table D - 4m records (growth: +15k records per day).
Table E - 1m records (growth: +5k records per day).
Table A updates around 1,000 records per second; updated/added rows are then queued for adding to SOLR search.
Would appreciate some much-needed advice to lower costs. Are there hidden costs or other solutions I should be aware of before starting development?
I'm afraid the scope for performance improvement for your DB is just too broad.
IOPS: some devops choose to provision 200 GB of storage (200 x 3 = 600 baseline IOPS) rather than buying provisioned IOPS for smaller storage (say they only need 50 GB, and then purchase provisioned IOPS). You need to open a spreadsheet to find the pricing/performance sweet spot (a rough sketch of this comparison follows at the end of this answer).
You might need to create another denormalised table from table A if you frequently select from table A but do not traverse the whole text <500 chars column. Don't underestimate the text workload.
Index, index, index.
If you deal with tons of non-linear searches, perhaps copy the part of the relevant data to DynamoDB that you think will improve performance; test it first, but maintain the RDBMS structure.
There is no one-size-fits-all solution. Please also look into using a message queue if required.
Adding 200k records/day is actually not much for today's RDBMS. Even 1,000 IOPS usually only happen in bursts. If querying is the heaviest part, then you need to optimize that part.
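The "spreadsheet" mentioned in the IOPS point is easy to sketch. The prices below are purely illustrative placeholders, not current AWS rates; only the 3 IOPS/GB baseline ratio comes from this answer:

```python
# Sketch of the storage-vs-provisioned-IOPS trade-off mentioned above.
# All prices are illustrative placeholders, NOT current AWS rates.

GP_PRICE_PER_GB = 0.115            # hypothetical $/GB-month, general-purpose storage
IOPS_STORAGE_PRICE_PER_GB = 0.125  # hypothetical $/GB-month, provisioned-IOPS storage
PRICE_PER_PROVISIONED_IOPS = 0.10  # hypothetical $/IOPS-month
BASELINE_IOPS_PER_GB = 3           # ratio used in the answer (200 GB -> 600 IOPS)

def overprovisioned_storage(needed_iops):
    """Cost of buying extra general-purpose storage just for its baseline IOPS."""
    size_gb = needed_iops / BASELINE_IOPS_PER_GB
    return size_gb * GP_PRICE_PER_GB

def provisioned_iops(size_gb, needed_iops):
    """Cost of smaller storage plus explicitly provisioned IOPS."""
    return size_gb * IOPS_STORAGE_PRICE_PER_GB + needed_iops * PRICE_PER_PROVISIONED_IOPS

print(overprovisioned_storage(600))  # e.g. 200 GB of general-purpose storage
print(provisioned_iops(50, 600))     # e.g. 50 GB + 600 provisioned IOPS
```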

Load large amount of data (10000 signals - time series)

I am analyzing how to store over 10,000 signals 50 times per second. Probably I will read them from memory. Each signal has a timestamp (8 bytes) and a double (8 bytes). This process will run for 4 hours, 1 day a week. Then:
10000 x 50 x 16 bytes = 8 MB / second.
8 MB/s x 3600 x 4 hours = ~115 GB / week.
What database (or other option, like files) should I use to store this data quickly? Are MongoDB or Cassandra good options? What language would be good? Is Java fast enough to read the data from memory and store it in the database, or is C a better choice?
Is a cluster solution needed?
Thanks.
Based on your description, I'd suggest an SQLite database. It's very lightweight and faster than MySQL and MongoDB.
See benchmark here.
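As a rough illustration of that suggestion (a sketch, not a benchmark): with SQLite, batching many rows into a single transaction is the main lever at this kind of write rate. The samples table and its columns below are hypothetical:

```python
# Minimal sketch of batched time-series inserts into SQLite.
# Table and column names are hypothetical.
import sqlite3
import time

conn = sqlite3.connect("signals.db")
conn.execute("PRAGMA journal_mode=WAL")    # better concurrency for write-heavy loads
conn.execute("PRAGMA synchronous=NORMAL")  # trade a little durability for speed
conn.execute(
    "CREATE TABLE IF NOT EXISTS samples (signal_id INTEGER, ts REAL, value REAL)"
)

def write_batch(rows):
    # One transaction per batch: committing per row would be far too slow.
    with conn:
        conn.executemany("INSERT INTO samples VALUES (?, ?, ?)", rows)

# Stand-in for one 20 ms tick of 10,000 signals.
now = time.time()
batch = [(signal_id, now, 0.0) for signal_id in range(10_000)]
write_batch(batch)
```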
It is roughly 700~800 MB of data per single day, so if you need to query it after one month, 25 GB will be scanned.
In this case you will probably need a clustered/sharded solution to split the load.
As the data will grow constantly, you need a dynamic solution that can use MongoDB shards and replica sets to spread the load and manage data distribution.
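As a rough illustration of that suggestion, enabling sharding for a time-series collection could look like the sketch below. The database name, collection name, and shard key are hypothetical, and a running sharded cluster behind a mongos router is assumed:

```python
# Sketch: enable sharding for a hypothetical time-series collection.
# Assumes a sharded MongoDB cluster reachable through a mongos router.
from pymongo import MongoClient

client = MongoClient("mongodb://mongos-host:27017")  # hypothetical mongos address

# Shard the database and the collection on a compound key so that
# documents for one signal stay together and are ordered by time.
client.admin.command("enableSharding", "tsdb")
client.admin.command(
    "shardCollection", "tsdb.samples", key={"signal_id": 1, "ts": 1}
)

# Writes then go through mongos as usual.
client.tsdb.samples.insert_one({"signal_id": 1, "ts": 1700000000.0, "value": 0.0})
```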

AWS Aurora IOPS Cost

I have a table in MySQL at the moment: 7.3 million rows, 1.5 GB in size according to this query:
How to get the sizes of the tables of a mysql database?
I'm trying to get a handle on what a full table scan of that would cost me in AWS Aurora.
AWS lists it as:
I/O Rate - $0.200 per 1 million requests
But how do I possibly translate that into "what will this cost me"?
See also https://stackoverflow.com/a/6927400/122441
As an example, a medium sized website database might be 100 GB in size and expect to average 100 I/Os per second over the course of a month. This would translate to $10 per month in storage costs (100 GB x $0.10/month), and approximately $26 per month in request costs (~2.6 million seconds/month x 100 I/O per second * $0.10 per million I/O).
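One rough way to turn that rate into a per-scan estimate is to divide the table size by the amount of data each read request covers. The 16 KB page size below is an assumption (the InnoDB default), not something stated on the Aurora pricing page, and caching will change the real number:

```python
# Rough estimate of what one full table scan might cost in I/O charges.
# Assumes each I/O request reads one 16 KB page (InnoDB default page size);
# actual Aurora read sizes and caching will change the real number.

TABLE_SIZE_BYTES = 1.5 * 1024**3   # 1.5 GB table from the question
PAGE_SIZE_BYTES = 16 * 1024        # assumed 16 KB pages
PRICE_PER_MILLION_IO = 0.20        # $0.200 per 1 million requests

pages = TABLE_SIZE_BYTES / PAGE_SIZE_BYTES        # ~98,304 read requests
cost = pages / 1_000_000 * PRICE_PER_MILLION_IO   # ~$0.02 per cold scan

print(f"{pages:.0f} requests -> ${cost:.4f}")
```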

Is a MySQL relational database scalable for holding 80 GB of additional logs per day?

I am currently deciding on a long-term architecture solution for storing DNS logs. The amount of data we are talking about is some 80 GB of logs per day at the peak. Currently I am looking at NoSQL databases such as MongoDB, as well as relational MySQL. I want to structure a solution that has three requirements:
Storage: This is a long-term project, so I want the capability to store 80 GB of logs per day (~30 TB a year!). I realize this is pretty ridiculous, so I'm willing to have a retention period (keeping 6 months' worth of logs = ~15 TB constant).
Scalability: As it is a long-term solution, this is a big issue. I've heard that MongoDB is horizontally scalable, while MySQL is not? Any elaboration on this would be very well received.
Query speed: As close to instantaneous querying as possible.
It should be noted that our logs are stored on an intermediary server, so we do not need to forward logs from our DNS servers.
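For what it's worth, the sizing figures in the question are easy to double-check; this is a minimal sketch of the retention arithmetic using only the ingest rate and retention window given above:

```python
# Retention sizing for the figures in the question:
# 80 GB/day ingest, ~6-month retention window.

GB_PER_DAY = 80
RETENTION_DAYS = 182          # ~6 months
DAYS_PER_YEAR = 365

yearly_tb = GB_PER_DAY * DAYS_PER_YEAR / 1000          # ~29.2 TB per year
steady_state_tb = GB_PER_DAY * RETENTION_DAYS / 1000   # ~14.6 TB held at any time

print(f"{yearly_tb:.1f} TB/year, ~{steady_state_tb:.1f} TB steady state")
```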