If my application, which runs on Oracle Bare Metal compute nodes, accesses the Oracle Bare Metal object storage service using the provided HTTP APIs, will that incur bandwidth costs? In other words, is there a way for the compute nodes to route the traffic through an internal network rather than going through the public IPs of the bare metal object storage service?
There is no charge for network traffic between OBMCS compute and the object store.
I had my app on App Engine (Flex), but it's costing a lot with no traffic yet!
I decided to move it to Kubernetes Engine, which runs on Compute Engine.
Another reason I moved to Kubernetes is that I wanted to run Docker container services like Memcached, which come with an added cost in App Engine Flex.
If you are tempted to ask why I am not using App Engine Standard, which is more economical: I couldn't find any easy way, if there is one at all, to run services like GDAL and Memcached there.
I thought Kubernetes should be a cheaper option, but what I am seeing is the opposite.
I have even had to change the machine type to g1-small from N1...
Am I missing something?
Any ideas on how to reduce cost in Kubernetes / compute engine instances?
Please have a look at the GKE Pricing and App Engine Pricing documentation:
GKE clusters accrue a management fee of $0.10 per cluster per hour,
irrespective of cluster size or topology. One zonal (single-zone or
multi-zonal) cluster per billing account is free.
GKE uses Compute Engine instances for worker nodes in the cluster. You
are billed for each of those instances according to Compute Engine's
pricing, until the nodes are deleted. Compute Engine resources are
billed on a per-second basis with a one-minute minimum usage cost.
and
Apps running in the flexible environment are deployed to virtual
machine types that you specify. These virtual machine resources are
billed on a per-second basis with a 1 minute minimum usage cost.
Billing for the memory resource includes the memory your app uses plus
the memory that the runtime itself needs to run your app. This means
your memory usage and costs can be higher than the maximum memory you
request for your app.
So, both GAE Flex and GKE cluster are "billed on a per-second basis with a 1 minute minimum usage cost".
To estimate usage costs in advance you can use the Google Cloud Pricing Calculator; you can also use it to estimate how changing the parameters of your cluster can help reduce cost, and which solution is more cost-effective.
In addition, please have a look at the documentation Best practices for running cost-optimized Kubernetes applications on GKE.
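To make the quoted numbers concrete, here is a rough back-of-the-envelope estimator. This is a sketch only: it covers just the cluster management fee from the quote above, assumes roughly 730 hours in a month, and ignores the Compute Engine node costs that usually dominate the bill.

```python
# Rough estimate of the GKE management fee quoted above:
# $0.10 per cluster per hour, with one zonal cluster per
# billing account free. Node (Compute Engine) costs are separate.
HOURLY_FEE = 0.10

def gke_management_fee(clusters: int, hours: int = 730,
                       free_zonal_clusters: int = 1) -> float:
    """Monthly management fee in USD for `clusters` clusters running `hours` hours."""
    billable = max(0, clusters - free_zonal_clusters)
    return billable * HOURLY_FEE * hours

print(gke_management_fee(1))  # first zonal cluster is free -> 0.0
print(gke_management_fee(2))  # one billable cluster for ~730 h, roughly $73
```

Check the current pricing page before relying on these figures; the point is only that the fixed fee is easy to reason about separately from node costs.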
In a Cloud Function I need to retrieve a bunch of key-value pairs to process. Right now I'm storing them as a JSON file in Cloud Storage.
Is there any better way?
Environment variables don't suit, as (a) there are too many KV pairs, (b) the same GCF may need different sets of KV pairs depending on the incoming params, and (c) those KV pairs can change over time.
BigQuery seems to be overkill, also given that some KV pairs have a few levels of nesting.
Thanks!
You can use Memorystore, but note that it's not persistent; see the FAQ.
Cloud Memorystore for Redis provides a fully managed in-memory data
store service built on scalable, secure, and highly available
infrastructure managed by Google. Use Cloud Memorystore to build
application caches that provides sub-millisecond data access. Cloud
Memorystore is compatible with the Redis protocol, allowing easy
migration with zero code changes.
Serverless VPC Access enables you to connect from the Cloud Functions environment directly to your Memorystore instances.
Note: Some resources, such as Memorystore instances, require connections to come from the same region as the resource.
Update
For persisted storage you could use Firestore.
See the tutorial Use Cloud Firestore with Cloud Functions.
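Since some of the KV pairs have a few levels of nesting, and a Firestore document maps naturally to a nested dict, one option is to flatten the document into dotted keys for simple lookup. A minimal sketch; the collection and document names in the comment are illustrative assumptions, and the flattening itself is plain Python:

```python
# Sketch: flatten a nested key-value document (as returned by, e.g.,
# a Firestore document's to_dict()) into dotted keys for easy lookup.
# The Firestore fetch itself (assumed names, not run here) would be:
#   from google.cloud import firestore
#   doc = firestore.Client().collection("configs").document("my-func").get()
#   data = doc.to_dict()

def flatten(data: dict, prefix: str = "") -> dict:
    """Flatten nested dicts into {'a.b.c': value} form."""
    flat = {}
    for key, value in data.items():
        dotted = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten(value, dotted + "."))
        else:
            flat[dotted] = value
    return flat

data = {"db": {"host": "10.0.0.5", "port": 3306}, "debug": False}
print(flatten(data))  # {'db.host': '10.0.0.5', 'db.port': 3306, 'debug': False}
```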
When users enter their API token in the browser, I need to save that token securely. What is a good approach to encrypting users' tokens?
I was considering using AWS Secrets Manager to store users' tokens through its API, but it turns out to be really expensive: $0.40 per secret per month.
I might consider encrypting the user token in MySQL and storing the master secret in the .env file.
Is there an alternative approach?
Since you're already using AWS services, it makes sense to take advantage of more resilient cloud-native solutions.
With SSM you only pay for the underlying AWS resources managed or created by AWS Systems Manager; note, however, that Parameter Store, like the majority of other AWS services, uses KMS for encryption and decryption.
Additional alternatives:
Cache SSM params, for instance with https://github.com/alexcasalboni/ssm-cache-python
Use credstash (DynamoDB + KMS)
Use S3 with server-side and client-side encryption: https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingEncryption.html
Based on your usage you will need to do the math; KMS is not free, but it has a decent monthly free tier.
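The caching idea behind the ssm-cache-python suggestion can be sketched in a few lines: keep decrypted parameters in memory with a TTL so repeated reads don't trigger SSM/KMS calls. This is an illustrative stand-in, not that library's actual API; the boto3 call shown in the comment is the usual way to fetch a SecureString parameter:

```python
import time

# Sketch of the caching idea behind ssm-cache-python: cache decrypted
# parameters in memory with a TTL so repeated reads don't hit SSM/KMS.
# `fetch` stands in for the real call, e.g. (boto3):
#   boto3.client("ssm").get_parameter(Name=name, WithDecryption=True)

class SSMCache:
    def __init__(self, fetch, ttl_seconds=60.0):
        self._fetch = fetch          # callable: name -> value
        self._ttl = ttl_seconds
        self._entries = {}           # name -> (value, fetched_at)

    def get(self, name):
        entry = self._entries.get(name)
        if entry is not None and time.monotonic() - entry[1] < self._ttl:
            return entry[0]          # fresh cached value, no SSM/KMS call
        value = self._fetch(name)    # cache miss or expired: fetch again
        self._entries[name] = (value, time.monotonic())
        return value
```

Each warm Lambda container keeps its own in-memory cache, so KMS only charges for the actual decryptions, not for every invocation.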
For your scenario, you can save the token in AWS DynamoDB, encrypted at rest with AWS KMS.
In addition to the Lambda charges, AWS KMS will mainly cost $1 per month for the CMK and, based on the on-demand encryption and decryption operations, about $0.03 per 10,000 requests; the free tier gives you 20,000 requests/month free of charge.
For more details about pricing, refer to the AWS KMS pricing section.
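Putting that arithmetic into a small estimator makes the trade-off against Secrets Manager's $0.40/secret/month easy to compare. A sketch based only on the figures quoted here ($1/month per CMK, about $0.03 per 10,000 requests, 20,000 requests/month free); check the current pricing page for real numbers:

```python
# Worked version of the cost arithmetic above: $1/month for the CMK,
# ~$0.03 per 10,000 requests, with the first 20,000 requests/month free.
CMK_MONTHLY = 1.00
PER_10K_REQUESTS = 0.03
FREE_REQUESTS = 20_000

def kms_monthly_cost(requests: int) -> float:
    """Estimated monthly KMS cost in USD for a given request volume."""
    billable = max(0, requests - FREE_REQUESTS)
    return CMK_MONTHLY + (billable / 10_000) * PER_10K_REQUESTS

print(kms_monthly_cost(20_000))     # within the free tier -> 1.0 (CMK only)
print(kms_monthly_cost(1_020_000))  # 1,000,000 billable requests, roughly $4
```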
Currently my boss has asked my team to relocate our database to a cloud server (Windows). Besides that, he also asked us to attach SAN/NAS storage to that server for better speed/performance. The problem is that we have no experience with SAN/NAS storage.
The question is: can SAN/NAS storage be attached to a cloud server? If it can, is this a good practice? We are currently using MySQL for our database.
Thanks
Are we talking about a private or a public cloud (AWS, Azure)? Though there are storage arrays that are able to proxy cloud storage, I don't think there are products for attaching an on-site storage array to a server in a public cloud.
The reason you want to use, e.g., a SAN is performance: minimal latency. Imagine the connection between a storage array in a separate datacenter and a cloud server over TCP/IP, possibly far apart. The latency would make it unusable for, e.g., a high-transaction workload and would defeat the purpose of a storage array.
If you were talking about a private cloud, VMware-orchestrated or OpenStack, then that might be possible via RDM (VMware) or Cinder (probably a Cinder storage node). I think Azure is adding a feature where you can integrate part of your local infrastructure into Azure as an availability zone, so there might be possibilities.
I have the following scenario, where a company has two regions on the Amazon cloud: Region 1 in the US and Region 2 in Asia. In the current architecture, AWS DynamoDB and a MySQL RDS solution are used and installed in the US region. The EC2 servers in the Asia region, which hold the business logic, have to access DynamoDB and RDS in the US region to get or update data.
The company now wants to install DynamoDB and MySQL RDS in the Asia region to get better performance, so the EC2 servers in the Asia region can get the required data from the same region.
The main issue now is how we can sync the data between the two regions; the current DynamoDB and RDS don't inherently support multiple regions.
Are there any best practices in such a case?
This is a big problem when the access is from different geographies.
RDS has of late added some support for cross-region "read" replicas. Take a look here: http://aws.amazon.com/about-aws/whats-new/2013/11/26/announcing-point-and-click-database-replication-across-aws-regions-for-amazon-rds-for-mysql/
DynamoDB doesn't have this. You might have to think about partitioning your data (keep Asia data in Asia and US data in the US). Another possibility is to increase speed by using an in-memory cache. Don't hit DynamoDB for every read: after every successful read, cache the object in AWS ElastiCache, and set up this cache near the required regions (you will need multiple cache clusters). Then all reads will be fast, since they are now region-local. When the data changes (a write), invalidate the object in the cache as well.
However, this method only speeds up reads (not writes). Typically most apps will be OK with this.
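The read-caching-with-invalidation pattern described above can be sketched as follows. Here `cache` and `table` are duck-typed stand-ins (plain dicts work for a local test) for a region-local ElastiCache client and the remote DynamoDB table; the real clients would need their own get/put/delete calls:

```python
# Sketch of the region-local read cache described above. `cache` stands
# in for an ElastiCache client near the reader, `table` for the (remote)
# DynamoDB table; both are duck-typed so the pattern runs locally.

def cached_read(key, cache, table):
    """Read region-locally if possible; fall back to the remote table."""
    value = cache.get(key)
    if value is not None:
        return value                 # fast, region-local cache hit
    value = table[key]               # slow cross-region read
    cache[key] = value               # populate the cache for next time
    return value

def write(key, value, cache, table):
    """Write to the source of truth and invalidate the cached copy."""
    table[key] = value               # writes still cross regions
    cache.pop(key, None)             # drop stale entry; next read refills it
```

Note the answer's caveat still holds: writes remain cross-region, so this only helps read-heavy workloads.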