Is it possible to run a local database with a cloud backup, where, if the local database fails, the system switches over to the cloud so that new data is stored there until the failure is resolved? Once the failure is resolved, the new data would be copied to the local database and the system would resume using it. Any solution is acceptable (e.g. AWS or Azure), but how do I set it up so that the database login address and configuration within the application stay the same?
I searched a lot and, in my experience, there isn't any tool or Azure service that can do that.
I think it's impossible for now.
I was wondering if custom metadata for Google Compute Engine VM instances is an appropriate place to store sensitive information for configuring apps that run on the instance.
We use container-optimised OS images to run microservices. We configure the containers with environment variables for things like credentials for database connections and other systems we integrate with.
The VMs are treated as ephemeral for each CD deployment, and the best I have come up with so far is to create an instance template whose custom metadata is loaded with config values from a file I keep on my local machine; that metadata is then made available to a systemd unit when the VM starts up (via cloud-config).
The essence of this is that environment variable values (some containing credentials) are uploaded by me (they don't change very much) and are then pulled from the VM instance metadata server when a new VM is fired up. So I'm just wondering if there are any significant security concerns with this approach...
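To make the setup concrete, here is a rough cloud-config sketch of what I mean (not my actual files; the app-env attribute name, image, and paths are placeholders):

#cloud-config
# Sketch only: a systemd unit pulls a custom metadata attribute into an env
# file at boot and passes it to the container as environment variables.
write_files:
- path: /etc/systemd/system/myapp.service
  permissions: '0644'
  content: |
    [Unit]
    Description=Example microservice container
    After=docker.service
    Requires=docker.service

    [Service]
    ExecStartPre=/bin/sh -c 'curl -s -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/attributes/app-env" > /etc/myapp.env'
    ExecStart=/usr/bin/docker run --rm --env-file /etc/myapp.env --name myapp gcr.io/my-project/myapp:latest
    ExecStop=/usr/bin/docker stop myapp

runcmd:
- systemctl daemon-reload
- systemctl start myapp.service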
Many thanks for your help
According to the Compute Engine documentation:
"Is metadata information secure? When you make a request to get information from the metadata server, your request and the subsequent metadata response never leaves the physical host running the virtual machine instance."
Since the request and response do not leave the physical host, you will not be able to access the metadata from another VM or from outside Google Cloud Platform. However, any user with access to the VM will be able to query the metadata server and retrieve the information.
Based on the information you provided, storing credentials for a test or staging environment in this manner would be acceptable. However, if this is a production system with customer data or information important to the business, I would keep the credentials in a secure store that tracks access. The data in the metadata server is not encrypted, and accesses are not logged.
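To illustrate that last point, here is a minimal sketch (the app-env attribute name is hypothetical) of how any process running on the instance can read a custom metadata value; the metadata server only requires the Metadata-Flavor header, no credentials:

// Minimal sketch: read a custom instance metadata attribute from inside the VM.
// The "app-env" attribute name is a placeholder; no credentials are required,
// only the Metadata-Flavor header, which is why any user on the VM can do this.
using System;
using System.Net.Http;
using System.Threading.Tasks;

class MetadataDemo
{
    static async Task Main()
    {
        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("Metadata-Flavor", "Google");
        string url = "http://metadata.google.internal/computeMetadata/v1/instance/attributes/app-env";
        string value = await client.GetStringAsync(url);
        Console.WriteLine(value); // prints the stored configuration, credentials included
    }
}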
I made a REST API with Spring Boot, connected to an existing MySQL database. This database is not hosted on my local machine.
The API works fine locally, but I want to deploy it on AWS.
Is it possible to use this remote MySQL database or do I need to use a new one hosted on AWS?
If it is possible, can you guys link any tutorial or documentation? I can't find anything related to this particular issue.
Thank you!
Yes - AWS does not limit you to using only their RDS (Relational Database Service) offerings. Configuration of the DB will be the same (or similar, if you want to use a different instance than the one used for your local development) as for your local environment.
An application hosted in AWS can connect to both a cloud DB and an on-premises DB. The only thing we need to check is the security groups configured in EC2, along with the other DB configurations.
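For example, with Spring Boot the datasource settings in application.properties stay the same kind of configuration whether the API runs locally or on an EC2 instance; only the host has to be reachable (the values below are placeholders):

# application.properties - placeholder values for the existing remote MySQL database
spring.datasource.url=jdbc:mysql://your-existing-db-host.example.com:3306/mydb
spring.datasource.username=myuser
spring.datasource.password=change-me

If the database restricts inbound connections by IP, remember to also allow the EC2 instance's address (or its security group / NAT address) on the database side.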
I have an Azure App Service and an Azure Storage Account. I know there is a server/vm behind the app service, but I have not explicitly started a machine.
I'm trying to import data from an Access database which will be regularly uploaded to a file share in my storage account. I'd like to use an Azure WebJob to do the work in the background.
I'm trying to use DAO to read the data:
// DAO (Access Database Engine) COM interop is referenced in the project.
string path = @"\\server\share\folder\database.mdb";
DBEngine dbe = new DBEngine();
Database db = dbe.OpenDatabase(path);
DAO.Recordset rs = db.OpenRecordset("select * from ...");
This works when I run it locally, but when I run it in my WebJob against a file share in my storage account, it does not find the file. I assume this is because DBEngine knows nothing of Azure storage account names and access keys, does not send them, and Azure Storage therefore does not respond.
So what I'd like to try is to see if I can map an Azure Storage Fileshare onto the server underlying my App Service. I've tried a number of different things, but have received variations of "Access Denied" each time. I have tried:
- Running net use T: \\name.file.core.windows.net\azurefileshare /u:name key from the App Service console in the Azure Portal
- Running net use from a process within my WebJob
- Invoking WNetAddConnection2 from within my WebJob
Looks like the server is locked down tight. Does anyone have any ideas on how I might be able to map the fileshare onto the underlying server?
Many thanks
As far as I know, an Azure web app runs in a sandbox, so we cannot map an Azure file share to an Azure web app. Azure File storage is therefore not a good fit if you choose an Azure web app. From my experience, the workarounds below are available to you. Hope this gives you some tips.
1) Use Azure File storage, but choose an Azure VM or a Cloud Service as the host service.
2) Still choose an Azure web app as the host service, but include the Access db in the solution and upload it to the Azure web app.
3) Choose SQL Azure as the database instead. Here is the article that could help us migrate the Access database to SQL Azure.
In the end, as Jambor rightly says, the App Service VM is locked down tight.
However, it turns out that the App Service VM comes with some local temporary storage for the use of the various components running on the VM.
This is at D:\local\Temp\ and can be written to by a web job.
Interestingly, this is a logical folder on a different share/drive from D:\local and the size of this additional storage is dependent on the App Service's scale.
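One way to use that temp area (a sketch only, assuming the classic WindowsAzure.Storage file client; the share, folder, and file names are placeholders) is to copy the .mdb down from the file share first and then point DAO at the local copy:

using System.IO;
using DAO;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.File;

class AccessImport
{
    // Copies the .mdb from the Azure file share into the local temp area,
    // then opens the local copy with DAO (which only understands plain paths).
    static Database OpenFromShare(string storageConnectionString)
    {
        var account = CloudStorageAccount.Parse(storageConnectionString);
        var remoteFile = account.CreateCloudFileClient()
            .GetShareReference("azurefileshare")      // placeholder share name
            .GetRootDirectoryReference()
            .GetDirectoryReference("folder")          // placeholder folder
            .GetFileReference("database.mdb");        // placeholder file

        string localPath = Path.Combine(@"D:\local\Temp", "database.mdb");
        remoteFile.DownloadToFile(localPath, FileMode.Create);

        return new DBEngine().OpenDatabase(localPath);
    }
}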
I have deployed my web server which requires a MySQL database for storage. I've created a Second Generation MySQL instance with one failover replica but I am not sure how I can connect to those.
I am not sure how to configure these instances and what I have to consider here, e.g. region/zone. The flexible environment appears to be unavailable in Europe, unfortunately - at the moment at least - so I guess I'll have to place the SQL instances in the US too.
Will those instances have to be in the same local network or can they communicate over regions? Will I even be able to control this or will all this be decided by Google Cloud?
Could anybody who has done this before give me a few details about what to do here?
For best performance, you should place your App Engine instances in the same region as your Cloud SQL instance.
For information on how to connect from your application to the Cloud SQL MySQL instance see the following documentation: https://cloud.google.com/sql/docs/dev-access#gaev2-csqlv2
The short summary is that you have to modify your app.yaml file to list the Cloud SQL instances you will be connecting to. Once that's done, a local socket will appear inside the App Engine VM that will allow you to connect to your Cloud SQL instance.
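For example, for the flexible environment the relevant app.yaml entry looks roughly like this (the project, region, and instance names below are placeholders):

# app.yaml - list the Cloud SQL instances the app is allowed to connect to
beta_settings:
  cloud_sql_instances: my-project:us-central1:my-instance

The application then connects through the local socket /cloudsql/my-project:us-central1:my-instance instead of a TCP host name.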
BACKGROUND-
I am planning to make a website that will accept data from users and store it in a database (MySQL). The website would be served from Google Cloud servers. I have installed MAMP on my Mac for web development.
PROBLEM-
Google Cloud also provides Cloud SQL. Now I have a few questions:
1) Once I finish designing my website on MAMP and want to deploy it on cloud servers, I will have the database settings of my local machine. Does this mean that, before putting it on the cloud, I would have to change the back-end code that specifies the database settings in order to use Cloud SQL as the database? If yes, how tedious is it to do so (changing the database from MySQL in the testing environment to Cloud SQL in the deployment environment)?
2) Also, is there a way to use the cloud but not use Cloud SQL?
3) What other database choices can be combined with deploying a website on the cloud?
Usually, changing databases requires significant effort (testing and some config changes), as each database provides many additional features that don't work directly on another database.
You can use the cloud without Cloud SQL (Cloud SQL is just one part of it).
But Cloud SQL is MySQL only, as per the information given by Google at the link below:
https://cloud.google.com/products/
So it should not be a big deal for you to migrate the project from your local system to the cloud. You only have to configure the connection details (it will not simply be localhost).