I have created a vault/key under a compartment.
Since the Vault service is regional, the vault is only available in the region where I created it.
Even though the tenancy subscribes to multiple regions, the compartment shows up in the other regions but the vault does not. Is there a way to replicate the vault/keys/secrets across the regions the tenancy is subscribed to?
I have not done this myself, but you could try this approach and see if the following steps will work for you:
Step 1. Use the BackupKey/BackupVault API (from Vault Service) in the SOURCE region to create the relevant key/vault encrypted file(s).
Step 2. Use the CopyObject API (from Object Storage Service) to copy the file(s) created in Step 1 from your SOURCE region to all DESTINATION regions.
Step 3. Use the RestoreKey/RestoreVault API (from Vault Service) to restore the key/vault in the DESTINATION regions.
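I haven't verified this end to end, but as a rough sketch of Step 2, the cross-region copy can be done with the CopyObject call in the OCI Python SDK. The namespace, bucket, object, and region names below are placeholders; Steps 1 and 3 would go through the Vault service clients, and you should check the current SDK docs for their exact request models.

import oci

# Sketch of Step 2: copy the vault/key backup object from the SOURCE region's
# bucket to a bucket in a DESTINATION region. Bucket/object/region names are
# placeholders.
config = oci.config.from_file()  # profile pointing at the SOURCE region
object_storage = oci.object_storage.ObjectStorageClient(config)
namespace = object_storage.get_namespace().data

copy_details = oci.object_storage.models.CopyObjectDetails(
    source_object_name="my-vault.backup",
    destination_region="us-ashburn-1",        # a DESTINATION region
    destination_namespace=namespace,
    destination_bucket="vault-backups-dest",  # bucket in the DESTINATION region
    destination_object_name="my-vault.backup",
)
object_storage.copy_object(
    namespace_name=namespace,
    bucket_name="vault-backups",              # bucket holding the Step 1 backup
    copy_object_details=copy_details,
)

# Steps 1 and 3: the backup/restore themselves go through the Vault service
# clients (e.g. oci.key_management.KmsVaultClient backup/restore operations);
# see the SDK documentation for the request details.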
I want to create a new API Management instance in the EU and make sure my data in APIM cannot leave the EU geographic zone.
Based on the Microsoft documentation, the data will not leave the configured region.
The Data residency in Azure documentation describes how customer data is stored and processed for the regions you select:
Most Azure services enable you to specify the region where your customer data will be stored and processed. Microsoft may replicate to other regions for data resiliency, but Microsoft will not store or process customer data outside the selected Geo. You and your users may move, copy, or access your customer data from any location globally.
Customers can configure the following Azure services, tiers, or plans to store customer data only in a single region:
https://learn.microsoft.com/en-us/azure/api-management/api-management-howto-deploy-multi-region
Only the gateway component of API Management is deployed to all regions. The service management component and developer portal are hosted in the Primary region only.
I would like to write a function using a Function App in Microsoft Azure to receive messages from an IoT Hub, convert them from Base64 to string, and store them in a container in Blob Storage. Could you please help me do this?
Thanks in advance.
Br
Masoud
Create an Azure Function with the IoT Hub trigger to receive the messages; this trigger template is available by default when you create the function.
The Connection value should be provided in the local.settings.json file. It is the IoT Hub connection string (the Event Hub-compatible endpoint), which you can get from the Azure Portal > IoT Hub resource > Built-in endpoints under Hub settings.
Run the function and you will see your messages flowing from the IoT Hub to your Azure Function (assuming you have devices or simulators connected that are sending data).
Refer to this article for step-by-step information on receiving the messages of IoT Hub to Azure Function.
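As a rough sketch of what that function could look like (Python programming model v1), the trigger decodes the Base64 payload and writes it to a blob with the Storage SDK. The container name "iotmessages", the reuse of the AzureWebJobsStorage connection setting, and the binding name "event" are assumptions; adjust them to your environment.

# __init__.py -- assumes a function.json with an "eventHubTrigger" binding named
# "event" (cardinality "one") whose Connection setting points at the IoT Hub's
# Event Hub-compatible endpoint mentioned above.
import base64
import logging
import os
from datetime import datetime, timezone

import azure.functions as func
from azure.storage.blob import BlobServiceClient


def main(event: func.EventHubEvent) -> None:
    # The IoT Hub trigger hands over the raw message body as bytes.
    raw = event.get_body()

    # Decode from Base64 to a plain string (fall back to the raw text if the
    # payload isn't actually Base64-encoded).
    try:
        message = base64.b64decode(raw).decode("utf-8")
    except Exception:
        message = raw.decode("utf-8", errors="replace")

    logging.info("Decoded IoT Hub message: %s", message)

    # Write the decoded message to a blob. "iotmessages" is a placeholder
    # container that must already exist; AzureWebJobsStorage is reused here
    # for simplicity.
    blob_service = BlobServiceClient.from_connection_string(
        os.environ["AzureWebJobsStorage"]
    )
    blob_name = f"{datetime.now(timezone.utc):%Y%m%dT%H%M%S%fZ}.txt"
    blob_service.get_blob_client(container="iotmessages", blob=blob_name).upload_blob(message)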
To save the messages coming from the sensors in Azure, you can use an Azure Storage account, with either a Blob container or Table storage.
Step 1: Go to IoT Hub resource > Message routing > Custom Endpoints > Add the New Endpoint for the Storage Service.
Step 2: Give the endpoint a name, pick the container created for storing the data (messages), and set the encoding format (JSON or AVRO), file name format, authentication type, etc.
This adds a storage account endpoint that routes messages received by the IoT Hub to the Azure Blob container.
The final step is to add a route that directs the data to this storage endpoint.
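If you go with this routing approach and pick JSON encoding, each routed blob typically contains one JSON record per line, with the message body carried in a Base64-encoded Body field (unless the device sets a JSON content type on its messages). A rough sketch for reading those back; the container name and connection-string setting are placeholders:

# Read messages that IoT Hub routing wrote to the container (JSON encoding).
# "iotmessages" and STORAGE_CONNECTION_STRING are placeholders; whether "Body"
# is Base64-encoded depends on the content type/encoding set on the messages.
import base64
import json
import os

from azure.storage.blob import ContainerClient

container = ContainerClient.from_connection_string(
    os.environ["STORAGE_CONNECTION_STRING"], container_name="iotmessages"
)

for blob in container.list_blobs():
    data = container.download_blob(blob.name).readall().decode("utf-8")
    for line in filter(None, data.splitlines()):
        record = json.loads(line)
        body = base64.b64decode(record["Body"]).decode("utf-8")
        print(blob.name, body)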
Please visit these sources for detailed information:
Saving IoT Hub messages to Azure Blob Storage.
Saving IoT Hub Sensor Data to Azure Table Storage
Official Documentation of Azure Functions IoT Hub Trigger
Note: watch the storage cost of this variant; it is generally recommended only for proofs of concept or very small, simple projects. Please refer to this article for more information on which storage account type to use with the IoT Hub trigger to optimize cost.
I have three service accounts:
App engine default service account
Datastore service account
Alert Center API service account
My Cloud Function uses Firestore in Datastore mode for bookkeeping and invokes the Alert Center API.
Only one service account can be assigned when deploying a Cloud Function.
Is there a way, similar to AWS, to create multiple inline policies and assign them to the default service account?
P.S. I tried creating a custom service account, but Datastore roles are not supported. Also, I do not want to store credentials in environment variables or upload a credentials file with the source code.
You're looking at service accounts a bit backwards.
Granted, I see how the naming can lead you in this direction. "Service" in this case doesn't refer to the service being offered, but rather to the non-human entities (i.e. apps, machines, etc - called services in this case) trying to access that offered service. From Understanding service accounts:
A service account is a special type of Google account that belongs to your application or a virtual machine (VM), instead of to an individual end user. Your application assumes the identity of the service account to call Google APIs, so that the users aren't directly involved.
So you shouldn't be looking at service accounts from the offered service perspective - i.e. Datastore or Alert Center API, but rather from their "users" perspective - your CF in this case.
That single service account assigned to a particular CF is simply identifying that CF (as opposed to some other CF, app, machine, user, etc) when accessing a certain service.
If you want that CF to be able to access a certain Google service you need to give that CF's service account the proper role(s) and/or permissions to do that.
For accessing the Datastore you'd be looking at these Permissions and Roles. If the datastore that your CFs need to access is in the same GCP project, the default CF service account, which is the same as the GAE app's service account for that project, already has access to the Datastore (assuming you're OK with using the default service account).
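For example, once the function's service account has a suitable Datastore role, code running inside the Cloud Function can use Application Default Credentials, so no key file or environment-variable credentials are involved. A minimal sketch, assuming the google-cloud-datastore client library and an illustrative "Bookkeeping" kind:

# main.py of the Cloud Function. The client picks up the identity of the
# service account the function was deployed with (Application Default
# Credentials), so no key file is needed.
from google.cloud import datastore

def handler(request):
    client = datastore.Client()  # Firestore in Datastore mode

    # "Bookkeeping" and "last-run" are illustrative names for this example.
    key = client.key("Bookkeeping", "last-run")
    entity = datastore.Entity(key=key)
    entity.update({"status": "ok"})
    client.put(entity)

    return "stored"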
I didn't use the Alert Center API, but apparently it uses OAuth 2.0, so you probably should go through Service accounts.
My task is to create MySQL inside Google Cloud SQL. Following the instructions, I tried to set up an instance, without luck. The problem is the message
"Authorized GAE applications must be in the same region as the database instance"
even though I have checked the region setting for both the instance and the application and they match. I also don't know what to put in the "authorized networks" box. Thanks in advance.
That message means you chose a region for your Cloud SQL instance (EU, for example) that is different from the region of the App Engine application (US, for example) in whose project you created the instance.
From the documentation:
Note: An App Engine application must be in the same region (either European Union or United States) as a Google Cloud SQL instance to be authorized to access that Google Cloud SQL instance.
Since the GAE location can't be changed, and the region of an existing Cloud SQL instance can't be changed either, you'd need to create a new Cloud SQL instance in the same region as your app.
Authorized networks is exactly what Paul said: the IPs or subnetworks you want to whitelist for access to your instance, and it only matters if you plan to connect to the instance directly with a MySQL client.
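To illustrate the distinction: from App Engine you connect through the /cloudsql unix socket using the instance connection name, with no Authorized networks entry needed, while a local MySQL client connects to the instance's public IP, which is what Authorized networks whitelists. A rough sketch with placeholder project, instance, and IP values:

import os
import MySQLdb  # mysqlclient, available in the App Engine standard runtime

# Placeholder connection name: <project-id>:<instance-id>
CLOUDSQL_CONNECTION_NAME = "my-project:my-sql-instance"

if os.getenv("SERVER_SOFTWARE", "").startswith("Google App Engine/"):
    # On App Engine: connect through the Cloud SQL unix socket,
    # no Authorized networks entry required.
    db = MySQLdb.connect(
        unix_socket="/cloudsql/" + CLOUDSQL_CONNECTION_NAME,
        user="root",
        passwd="my-password",
    )
else:
    # From a local mysql client or dev machine: connect to the instance's
    # public IP, which must be whitelisted under Authorized networks.
    db = MySQLdb.connect(host="203.0.113.10", user="root", passwd="my-password")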
I have a distributed Publisher (port 9446) and Store (port 9447). I'm starting them with the -Dprofile options per: http://docs.wso2.org/display/AM160/Running+the+Product+on+a+Preferred+Profile
and both components are configured as follows:
CarbonDB = wso2reg
User = wso2user
API = wso2API
Reg = wso2SharedRegistry (for governance and config).
When I create a new API on the Publisher and then publish to the gateway I see in the logs that it gets published:
INFO - API Initializing API: admin--CleanPhoneVerify:v1.0.0
But when I log into the Store on port 9447 (https://StorePubServer.domain.ext:9447/Store) I don't see the API.
However, when I log into the address (https://StorePubServer.domain.ext:9446/Store), I see it.
Question 1: Shouldn't the preferred profile start options prevent the Store from working on port 9446?
Question 2: Why don't I see the API on the Store running on port 9447 that I started with my -Dprofile option?
Answer 1
At the moment, profiles don't remove the web applications, i.e. the Store and Publisher apps. They only remove features that come through JARs, by eliminating the JARs that are not related to the given profile.
Answer 2
Please enable clustering in the Store and the Publisher by setting them to the same clustering domain. To do that, make the changes below on both the Store and the Publisher.
1. Open AM_HOME/repository/conf/axis2/axis2.xml and locate the clustering configuration.
2. Enable clustering:
<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent"
            enable="true">
3. Set a clustering domain value. This should be the same for both the Store and the Publisher:
<parameter name="domain">storepub.domain</parameter>
4. Restart the servers and try with a new API.