How long does ManageIQ take to reflect data after adding a provider (Azure, AWS, etc.)? - open-source

I added an Azure provider and an AWS provider to ManageIQ, and it has been more than a day with no data about instances reflected in ManageIQ. The providers authenticate successfully, but still no report about instances is displayed in ManageIQ for either the Azure or the AWS provider.

The time it takes for the initial refresh to complete depends on both the provider type and the amount of inventory in the provider. For most typically sized Azure or Amazon providers, it usually takes on the order of tens of minutes or less. If you are not seeing anything after a day, there is very likely a problem. For more information, take a look at evm.log, aws.log, and/or azure.log, found in /var/www/miq/vmdb/log.
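To narrow down where the refresh is failing, a quick first step is to search those logs for refresh-related errors. A minimal Python sketch, using the log directory from above; the `ERROR`/`EmsRefresh` keywords are assumptions about the log format, so adjust them to what you actually see in your logs:

```python
from pathlib import Path

def find_refresh_errors(lines, keywords=("ERROR", "EmsRefresh")):
    """Return lines that mention both an error and the refresh worker."""
    return [ln for ln in lines if all(k in ln for k in keywords)]

def scan_logs(log_dir="/var/www/miq/vmdb/log"):
    """Scan the ManageIQ logs named in the answer for refresh errors."""
    hits = {}
    for name in ("evm.log", "aws.log", "azure.log"):
        path = Path(log_dir) / name
        if path.exists():
            hits[name] = find_refresh_errors(path.read_text().splitlines())
    return hits
```

Authentication succeeding but inventory never appearing often shows up as repeated worker errors in evm.log, so grepping there first is usually the fastest diagnosis.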

Related

How to lower costs of having MySQL db in Google Cloud

I set up Google Cloud MySQL. I store just one user there (email, password, address) and I query it quite often for testing purposes on my website. I set up minimal zone availability, the lowest SSD storage, 3.75 GB memory, 1 vCPU, and disabled automatic backups, but running that database for the last 6 days has cost me £15... How can I decrease the cost of having a MySQL database in the cloud? I'm pretty sure paying that amount is way too much. Where is my mistake?
I suggest using the Google Pricing Calculator to check the different configurations and pricing you could have for a MySQL database in Cloud SQL.
Choosing Instance type
As you've said in your question, you're currently using the lowest standard instance, which is based on CPU and memory pricing.
As you're currently using your database for testing purposes, I would suggest configuring your database with the lowest shared-core machine type, which is db-f1-micro, as shown here. But note that:
The db-f1-micro and db-g1-small machine types are not included in the Cloud SQL SLA. These machine types are designed to provide low-cost test and development instances only. Do not use them for production instances.
Choosing Storage type
As you have already selected the lowest allowed disk space, you could lower costs by changing the storage type to HDD instead of SSD, if you haven't done so already, as stated in the documentation:
Choosing SSD, the default value, provides your instance with SSD storage. SSDs provide lower latency and higher data throughput. If you do not need high-performance access to your data, for example for long-term storage or rarely accessed data, you can reduce your costs by choosing HDD.
Note that the storage type can only be selected when you create the instance and cannot be changed later, as stated in the message shown when creating your instance.
Choice is permanent. Storage type affects performance.
Stop the instance when it is not in use
Finally, you could lower costs by stopping the database instance when it is not in use, as pointed out in the documentation:
Stopping an instance suspends instance charges. The instance data is unaffected, and charges for storage and IP addresses continue to apply.
Using Google Pricing Calculator
The following information is presented as a calculation exercise based on the Google Pricing Calculator:
The estimated fees provided by Google Cloud Pricing Calculator are for discussion purposes only and are not binding on either you or Google. Your actual fees may be higher or lower than the estimate. A more detailed and specific list of fees will be provided at time of sign up
Following the suggestions above, you could get a monthly estimate of 6.41 GBP, based on an instance running 24 hours a day, 7 days a week.
Using an SSD instead increases this to 7.01 GBP. As said before, the only way to change the storage type is to create a new instance and load your data into it.
The estimate drops to 2.04 GBP if you run the instance on HDD for only 8 hours a day, 5 days a week.
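To see how those estimates relate, here is a toy cost model in Python. The hourly and storage rates below are illustrative placeholders, not real Cloud SQL prices (use the pricing calculator for those); the point is that storage is billed for the whole month, while instance time is only billed while the instance runs:

```python
def monthly_cost(rate_per_hour, hours_per_day, days_per_week,
                 storage_gb, storage_rate_per_gb_month):
    """Rough monthly estimate: instance running time plus always-on storage."""
    weeks_per_month = 730 / (24 * 7)  # a commonly used 730-hour billing month
    running_hours = hours_per_day * days_per_week * weeks_per_month
    # Storage accrues for the full month whether or not the instance runs.
    return rate_per_hour * running_hours + storage_gb * storage_rate_per_gb_month

# Placeholder rates: 0.006 GBP/h instance, 0.10 GBP/GB-month, 10 GB disk.
always_on = monthly_cost(0.006, 24, 7, 10, 0.10)  # runs 24/7
part_time = monthly_cost(0.006, 8, 5, 10, 0.10)   # 8 h/day, 5 days/week
```

With these made-up rates, the part-time schedule cuts the compute portion by roughly the ratio of running hours (40/168), while the storage portion stays constant; this matches the shape of the 6.41 GBP vs. 2.04 GBP estimates above.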

EC2 instance type for an API with a complex database and data calculations

I need advice from anyone who uses AWS EC2 instances to host their projects.
Currently I have a PHP project (the backend API) and a ReactJS frontend. When testing locally, the API response time is 3 seconds (I am still optimizing my backend code to reduce it to 2 seconds), but my main concern is that when deployed to a staging machine on AWS, using t3.medium for the backend and t2.medium for the frontend, the response time is at least 19 seconds. Here are my goals:
1. For staging, a response time of at most 5 seconds, since this is mainly used for testing purposes.
2. For production, I want the same response time as on my local machine. My local machine has an i7 and 16 GB of RAM (with, of course, too many other applications running and lots of Chrome tabs open). The initial target for production is 10-15 users, but this will grow once our app is well tested and stable (I mean, the data should be accurate).
At first my plan was to test all the available EC2 instance types and see which of them suits my requirements, particularly the response time, but a friend told me that this would cost a lot, since AWS charges for the resources used every time an EC2 instance is provisioned. Also, what is the best approach, given that my backend API runs a lot of scripts? The scripts call the Amazon Selling Partner API and Advertising API, which are themselves currently very slow; some of their endpoints have a response time of at least 30 seconds, which is why I decided to run them in the background through cron jobs. These scripts also perform database writes after a successful response from the Amazon API.
Thank you
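On the background-job part of the question: running the slow Selling Partner / Advertising API calls from cron is a sound pattern, and one further improvement is to issue those calls concurrently, so the job's wall time approaches the slowest single call rather than the sum of all of them. A hedged Python sketch of the idea; `fetch_report` is a hypothetical stand-in for the real API call:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_report(endpoint):
    """Placeholder for a slow (~30 s) HTTP call to the Amazon API."""
    return {"endpoint": endpoint, "status": "ok"}

def run_background_sync(endpoints, save):
    """Fetch all endpoints concurrently; write to the DB only on success."""
    # Threads fit I/O-bound waiting; 8 workers is an arbitrary starting point.
    with ThreadPoolExecutor(max_workers=8) as pool:
        for result in pool.map(fetch_report, endpoints):
            if result["status"] == "ok":
                save(result)  # the database write described in the question
```

Because the bottleneck is waiting on an upstream API rather than CPU, restructuring the job this way is likely to help more than moving to a larger EC2 instance type.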

Are GCP CloudSQL instances billed by usage?

I'm starting a project where a Cloud SQL instance would be a great fit; however, I've noticed they are twice the price of a VM with the same specification on GCP.
I've been told by several DevOps people I work with that they are billed by usage only, which would be perfect for me. However, the pricing page states that "Instance pricing for MySQL is charged for every second that the instance is running".
https://cloud.google.com/sql/pricing#2nd-gen-pricing
I also see several people around the web saying they are billed by usage only.
Cloud SQL or VM Instance to host MySQL Database
Am I interpreting Google's pricing pages incorrectly?
Am I going to be billed for the instance being on or for its usage?
Billed by usage
It all depends on what you mean by usage. When you run a Cloud SQL instance, it's like a server (Compute Engine): until you stop it, you pay for it. It's not pay-per-request pricing, as you have with BigQuery.
With Cloud SQL, you also pay for the storage that you use, and the storage can grow automatically with usage. Be careful: the storage cannot be reduced, even if you delete data from the database!
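Putting the per-second wording quoted in the question into numbers, here is a minimal sketch of the billing model with placeholder rates. Note that the storage term never goes away, matching the point above that storage cannot shrink:

```python
def cloud_sql_charge(rate_per_hour, seconds_running,
                     storage_gb, storage_rate_per_gb_month):
    """Toy model: compute is billed per second running, storage per month."""
    compute = rate_per_hour * seconds_running / 3600.0
    storage = storage_gb * storage_rate_per_gb_month  # accrues even when stopped
    return compute + storage
```

So "billed by usage" is true only in the sense of running time: a stopped instance stops accruing the compute term, but an idle running instance costs the same as a busy one.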
The price is twice that of a similar Compute Engine instance
True! An n1-standard-1 Compute Engine instance is about $20 per month, and the same configuration on Cloud SQL is about $45.
BUT, what about the cost of managing your own SQL instance?
You have to update/patch the OS
You have to update/patch the DB engine (MySQL or Postgres)
You have to manage the security/network access
You have to take snapshots and ensure that restores actually work
You have to ensure high availability (people on call in case of server issues)
You have to tune the database parameters
You have to watch your storage and increase it when needed
You have to set up your replicas manually
Is it worth twice the price? For me, yes, but it all depends on your skills and your needs.
There are a lot of hidden configuration options that, when modified, can quickly halve your costs.
Practically speaking, GCP's Cloud SQL product only works by running 24/7; there is no time-based "by usage" option, short of manually stopping and restarting the instance.
There are a lot of tricks you can follow to lower costs; you can read about many of them here: https://medium.com/@the-bumbling-developer/can-you-use-google-cloud-platform-gcp-cheaply-and-safely-86284e04b332

Trying to understand GCP cloud costs and determine free or low-cost relational database hosting

I was originally planning to use Azure SQL for a client's database, but Azure estimated the cost at around $250/month for the most basic configuration. I remember from my own experimentation in the past that Azure costs were higher than expected, so I decided to look at GCP as an alternative.
GCP offered me a free trial credit of $300, so I accepted that by default. I created a new SQL Server instance in my GCP account with the most basic database configuration, then connected via SSMS and created a single database table with a single Id column. That's it. Now, two days later, with no additional usage of this table, my GCP free trial credit has been burned down by $15. Based on this trend, a SQL Server instance on GCP seems to cost about as much as an Azure SQL instance. Am I inferring this correctly?
Can you recommend a good-quality option that provides free relational database hosting for low-volume, low-transaction databases? SQL Server would be great, but MySQL should work too. I'm assuming MySQL is fairly equivalent for simple databases?
I don't know about the costs of other cloud providers, but GCP's are usually quite competitive. With Cloud SQL you pay per instance-hour, and you pay more or less based on different factors. Use the Google Cloud price calculator to get a general idea of the costs, and adjust your Cloud SQL configuration accordingly: https://cloud.google.com/products/calculator
Additionally, here you can find all the information regarding the pricing details of Cloud SQL.

Huge data transfer usage on RDS w/ MySQL

We started using RDS last month for our database needs, but we're seeing "data transfer in" usage of about 3~6 GB EACH DAY. Our database is about 4 GB in size. How is that possible? Is this some misconfiguration on my part?
We're also seeing 8~14 GB of "data transfer out" each day, and I really can't say why.
It's my first time using AWS (we're also using S3, but I've checked the reports and everything is accurate there), so I'm kind of lost.
For context, our application is built in JSF2 and we use Hibernate. We also have a PHP web service for a mobile application. We expect anywhere between 20~200 daily users, 24/7.
I've set up the security groups to only allow inbound traffic from our servers (and I removed all outbound rules; is that fine?).
Our instance: Single-AZ class db.t2.micro
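A quick sanity check for numbers like these is to estimate how much transfer your own traffic should generate and compare it with what AWS reports. All figures below are assumptions for illustration; measure your real request counts and payload sizes (for example from web-server logs) before drawing conclusions:

```python
def daily_transfer_gb(users, requests_per_user, bytes_per_response):
    """Back-of-the-envelope daily transfer volume for app traffic."""
    return users * requests_per_user * bytes_per_response / 1024**3

# Assumed: 200 users, 500 requests each, 100 KiB average response.
estimate = daily_transfer_gb(200, 500, 100 * 1024)  # ~9.5 GB/day
```

With heavy pages or chatty queries, 200 daily users can plausibly account for transfer in the 8~14 GB range; if your measured traffic cannot, look next at things like the ORM fetching more rows than needed, backups, or monitoring traffic.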