I am trying to integrate APIM and AKS, where APIM will be public-facing and AKS will be inside the VNet. I followed this article https://pumpingco.de/blog/control-ingress-to-aks-with-azure-api-management and did the following:
Created a new subnet for APIM and a new one for the API service.
Updated the APIM VNet settings to External and mapped it to the newly created subnet.
After that I deployed the service in AKS, and the service gets an IP from the API service subnet. When I port-forward, I am able to access the site, but when I enter the Swagger details in APIM it says it is unable to access it. I am new to APIM; is there any way I can ping or otherwise verify that the service is reachable?
Here is what the kubectl command returns:
NAME      TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
todosvc   LoadBalancer   10.0.196.84   10.0.1.4      8002:30409/TCP   19m
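APIM has no ping facility, but one quick way to confirm the service is reachable on the VNet is to curl the internal load balancer IP from a throwaway pod inside the cluster. A minimal sketch, assuming the curlimages/curl image and the IP and port from the output above:
# run a temporary pod in the cluster and curl the internal LB address
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- curl -v http://10.0.1.4:8002/
You can also use the Test tab on the imported API in the Azure portal, which shows the backend request and response, including any connection errors.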
Here is the network layout (screenshot omitted).
Update: I am able to access the endpoint when I enter it directly. I think the problem is with importing the Swagger definition.
It was a mistake on my end. I was entering the Swagger UI URL when it should be the path to the JSON file. For example, for our .NET Core application, instead of entering http://url/swagger/index.html it should be https://url/swagger/v1/swagger.json.
Alternatively, you can import the Swagger file itself as JSON or YAML.
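For reference, the import can also be scripted with the Azure CLI. A minimal sketch, where the resource group, service name, API ID, and path are placeholders:
az apim api import \
  --resource-group my-rg \
  --service-name my-apim \
  --api-id todosvc \
  --path todos \
  --specification-format OpenApiJson \
  --specification-url https://url/swagger/v1/swagger.json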
I have a bunch of app services listed in API Management. These services call third-party clients who want to whitelist my IP.
I would like to give them the public IP address of the APIM instance. I tried to check this by having my app service, which is exposed through APIM, call a dummy function app I had created. In the dummy function app I logged the header details.
It appeared that the IP coming through was that of the app service and not the APIM instance. I was expecting (and hoping) it would be the APIM IP.
On the APIM overview page, the public IP is visible in the top section.
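If you prefer the CLI, the same value can be queried. A sketch with placeholder names:
az apim show --name my-apim --resource-group my-rg --query publicIpAddresses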
APIM is only a gateway in front of the app service; it is not a hosting environment for it. So if you call third-party services from within the app service, the caller's IP will always be that of the app service, not of APIM. In fact, the outbound call won't go through APIM at all.
Firstly, the use case: I have a large Spring Boot monolith accompanied by some smaller Go services that perform certain tasks. Currently they are hosted privately on the same server and can therefore communicate internally using localhost. I am looking into deploying this to AWS as Elastic Beanstalk environments and am currently using the free tier to evaluate it. I want the Spring Boot application to be publicly visible and the Go services to be available to the Spring Boot application but not to the public. My impression is that I should deploy them as separate Elastic Beanstalk environments but assign them to the same VPC. If that is the wrong assumption, please let me know the correct one!
If that is what we want, however, this is my current initial issue. I have a VPC set up (with default values), and in my local repository I use eb init, eb create, etc. to deploy the application. When it is deployed and up and running and I go into Configuration in the AWS console for the EB environment, the network section simply says "This environment is not part of a VPC." I've tried selecting classic, application, and network as the load balancer type, but with the same result. Do I need to do something during eb create instead?
I've tried eb create --vpc but honestly don't know what to fill in for all the prompts:
Enter the VPC ID: xxxxxxxxxxxxxxxxx
Do you want to associate a public IP address? (Y/n): Y
Enter a comma-separated list of Amazon EC2 subnets: ?
Enter a comma-separated list of Amazon ELB subnets: ?
Do you want the load balancer to be public? (Select no for internal) (Y/n): ?
Enter a comma-separated list of Amazon VPC security groups:
What should I be looking for to enter here? The VPC ID, I assume, is the ID of the VPC I have created, but I am having difficulty understanding the rest of them. If I simply run eb create --vpc.id <XXXXXXXXXXXXXXXXXX>, I instead get ERROR: ServiceError - Configuration validation exception: Invalid option value: 'internal' (Namespace: 'aws:ec2:vpc', OptionName: 'ELBScheme'): Internal load balancers are valid only in a VPC; however, your environment is currently not running in a VPC.
Grateful for help!
You don't need two separate VPCs for your applications. In the same VPC, you can create one load balancer as internal and another as internet-facing.
Here is some information about the fields:
Enter the VPC ID: vpc-abc123
Do you want to associate a public IP address? (Y/n):
If internet-facing, yes. This assigns public IP addresses to the EC2 instances launched in your VPC.
Enter a comma-separated list of Amazon EC2 subnets:
These are the subnets your instances launch into. You can list private subnets here; private subnets cannot be reached from the internet directly, which is why you create a public-facing load balancer (for the internet-facing application) to receive the web traffic and forward it to the instances.
Enter a comma-separated list of Amazon ELB subnets:
For an internet-facing application, choose public subnets.
For an internal application, choose private subnets.
Do you want the load balancer to be public? (Select no for internal) (Y/n):
For an internet-facing application, yes.
For an internal application, no.
Enter a comma-separated list of Amazon VPC security groups:
The security groups should be created in the same VPC; in other words, if you inspect a security group, you should see your VPC ID on it.
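Putting it together, the same answers can be passed non-interactively to eb create. A sketch for an internet-facing application with instances in private subnets (all IDs are placeholders):
# public ELB in the public subnets, instances in the private subnets
eb create my-env \
  --vpc.id vpc-abc123 \
  --vpc.ec2subnets subnet-priv-1,subnet-priv-2 \
  --vpc.elbsubnets subnet-pub-1,subnet-pub-2 \
  --vpc.elbpublic \
  --vpc.securitygroups sg-0123456789abcdef0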
I'm deploying WSO2 API Manager 2.6.0 with an external MySQL database, and I'm trying to have my APIs persist when I change my deployment.
Currently I have two deployments using the same external database, one local and the other hosted on an AWS EKS cluster. When I create an API on my local deployment, I can only view it on my AWS deployment if I'm logged in to the Store, and vice versa for my localhost deployment.
The expected and desired behaviour is that all APIs created on both deployments should be displayed on the Store whether or not I'm logged in. Are there any configurations I can change to make this happen?
Here is the doc I used to configure the external database: https://docs.wso2.com/display/AM260/Installing+and+Configuring+the+Databases
I am trying to figure out what is involved in writing a console application that will run as part of a VSTS Release task; the program will read a connection string (secret) from a preconfigured Key Vault and then connect to an Azure SQL DB using that connection string and apply some changes.
Currently I have my Web Apps connecting to Key Vault and the Azure SQL Server using Azure AD application token authentication, so I know what is involved on that front.
When you check "Allow scripts to access OAuth token" on the agent settings page, can this token be used (via ADAL) to connect to Key Vault and SQL Server? (Assuming the VisualStudioSPNxxx has the appropriate access to the above resources.)
If not, what should I be looking for?
The VSTS token ("Allow scripts to access OAuth token") can't be used to connect to Key Vault.
You need to register an app with Azure Active Directory, grant it access to Key Vault, and then retrieve the connection string dynamically at release time.
For more information, see Protecting Secrets using VSTS and Azure Key Vault.
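As a sketch of that flow in a release script using the Azure CLI (the tenant, app credentials, vault, and secret names are placeholders you would supply as pipeline variables):
# sign in as the AAD app (service principal) that was granted access to the vault
az login --service-principal -u $APP_ID -p $APP_SECRET --tenant $TENANT_ID
# fetch the connection string secret at release time
az keyvault secret show --vault-name my-vault --name SqlConnectionString --query value -o tsv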
This is made relatively easy now with variable groups: https://learn.microsoft.com/en-us/vsts/pipelines/library/variable-groups?view=vsts
You can link a secret by connecting your Azure Key Vault to a variable group and then use the resulting variable as you would any other variable in a script or task.
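For example, if the linked variable group exposes a Key Vault secret named SqlConnectionString (a placeholder name), a script step can consume it like any other pipeline variable; MyConsoleApp.exe stands in for the hypothetical console application from the question:
# the agent substitutes $(SqlConnectionString) before the script runs
MyConsoleApp.exe "$(SqlConnectionString)"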
I have created a ClearDB MySQL instance on IBM Bluemix. Can I see the credentials (hostname, username, password, etc.) without binding the instance to an application running on Bluemix?
Thank you, Sandhya
It depends on whether the service provider has implemented the Service Keys feature. If they have, you can generate new credentials by clicking "Service Credentials" on the service dashboard page.
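The same can be done from the Cloud Foundry CLI. A sketch, where my-cleardb is the service instance name and my-key is an arbitrary key name:
# generate credentials without binding the instance to an app
cf create-service-key my-cleardb my-key
# display the generated credentials
cf service-key my-cleardb my-key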
ClearDB, however, currently requires you to bind it to a Cloud Foundry application to obtain service credentials.