I am trying to deploy a global forwarding rule. My YAML file is below:
resources:
- name: rule
  type: compute.v1.forwardingRule
  properties:
    portRange: 80-80
    IPProtocol: TCP
    target: projects/{{ env["project"] }}/global/targetHttpProxies/myproxy
    IPAddress: xx.xx.xx.xx
When I run the command:
gcloud deployment-manager deployments create grule --config test.yaml
It gives an error saying the resource property region is required. It is asking for a region, but I am trying to create a global forwarding rule, which should not require a region.
Maybe it should be compute.v1.globalForwardingRule?
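If that is the case, a minimal sketch of the corrected config could look like the following (only the resource type changes; the proxy name and IP placeholders are the ones from the question):
resources:
- name: rule
  type: compute.v1.globalForwardingRule
  properties:
    portRange: 80-80
    IPProtocol: TCP
    target: projects/{{ env["project"] }}/global/targetHttpProxies/myproxy
    IPAddress: xx.xx.xx.xx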
I am trying to implement automatic deployments for my Windows Kubernetes container app. I'm following the instructions from Google's windows-builder, but the trigger quickly fails with this error at about 1.5 minutes in:
2021/12/16 19:30:06 Set ingress firewall rule successfully
2021/12/16 19:30:06 Failed to get external IP address: Could not get external NAT IP from list
ERROR
ERROR: build step 0 "gcr.io/[my-project-id]/windows-builder" failed: step exited with non-zero status: 1
The container, gcr.io/[my-project-id]/windows-builder, definitely exists, and it is located in the same GCP project as the Cloud Build trigger, just as the windows-builder documentation instructs.
I structured my code based on Google's docker-windows example. Here is my repository file structure:
repository
  cloudbuild.yaml
  builder.ps1
  worker
    Dockerfile
Here is my cloudbuild.yaml:
steps:
# WORKER
- name: 'gcr.io/[my-project-id]/windows-builder'
  args: [ '--command', 'powershell.exe -file build.ps1' ]
# OPTIONS
options:
  logging: CLOUD_LOGGING_ONLY
Here is my builder.ps1:
docker build -t gcr.io/[my-project-id]/test-worker ./worker;
if ($?) {
    docker push gcr.io/[my-project-id]/test-worker;
}
Here is my Dockerfile:
FROM gcr.io/[my-project-id]/test-windows-node-base:onbuild
Does anybody know what I'm doing wrong here? Any help would be appreciated.
I replicated the steps from GitHub and got the same error. It throws the Failed to get external IP address... error because the external IP address of the VM is disabled by default in the source code. I was able to build successfully by adding '--create-external-ip', 'true' to cloudbuild.yaml.
Here is my cloudbuild.yaml:
steps:
- name: 'gcr.io/$PROJECT_ID/windows-builder'
  args: [ '--create-external-ip', 'true',
          '--command', 'powershell.exe -file build.ps1' ]
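For reference, a sketch of how this build could also be submitted manually from the repository root, assuming the default gcloud project is already set (gcloud builds submit is the standard Cloud Build command, not anything specific to windows-builder):
gcloud builds submit --config cloudbuild.yaml .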
I was following the tutorial on https://cloud.google.com/tpu/docs/how-to.
I created a TPU instance and tried to connect to it with the gcloud compute ssh command. Then this error occurred.
AppData\Local\Google\Cloud SDK>gcloud compute ssh node-1 --zone=asia-east1-c
ERROR: (gcloud.compute.ssh) Could not fetch resource:
- The resource 'projects/project-masker/zones/asia-east1-c/instances/node-1' was not found
Trying to solve this error, I found out that the TPUs were not included in the execution groups.
AppData\Local\Google\Cloud SDK>gcloud compute tpus list
NAME    ZONE          ACCELERATOR_TYPE  NETWORK  RANGE             STATUS
node-2  asia-east1-c  v2-8              default  10.75.202.248/29  READY
node-1  asia-east1-c  v2-8              default  10.82.81.168/29   READY
AppData\Local\Google\Cloud SDK>gcloud compute tpus execution-groups list
Listed 0 items.
This is what I got when I tried to restart the TPU.
Request issued for: [node-1]
Waiting for operation [projects/project-masker/locations/asia-east1-c/operations/operation-1625299249870-5c633787137b9-e14800b7-d997be6b] to complete...done.
done: true
metadata:
  '@type': type.googleapis.com/google.cloud.common.OperationMetadata
  apiVersion: v1
  cancelRequested: false
  createTime: '2021-07-03T08:00:49.884674545Z'
  endTime: '2021-07-03T08:01:31.161199334Z'
  target: projects/project-masker/locations/asia-east1-c/nodes/node-1
  verb: update
name: projects/project-masker/locations/asia-east1-c/operations/operation-1625299249870-5c633787137b9-e14800b7-d997be6b
response:
  '@type': type.googleapis.com/google.cloud.tpu.v1.Node
  acceleratorType: v2-8
  apiVersion: V1
  cidrBlock: 10.82.81.168/29
  createTime: '2021-07-03T07:27:41.148997156Z'
  health: HEALTHY
  ipAddress: 10.82.81.170
  name: projects/project-masker/locations/asia-east1-c/nodes/node-1
  network: global/networks/default
  networkEndpoints:
  - ipAddress: 10.82.81.170
    port: 8470
  port: '8470'
  schedulingConfig: {}
  serviceAccount: service-...@cloud-tpu.iam.gserviceaccount.com
  state: READY
  tensorflowVersion: pytorch-1.9
I tried to find some related articles on Google, but I couldn't find any. How can I fix this?
You can't SSH to a TPU node directly, so gcloud compute ssh {tpu_name} isn't expected to work.
You can, however, SSH directly to a TPU VM; please see this link. If you are already using a TPU VM, then your issue is that you're running
gcloud compute ssh
rather than
gcloud alpha compute tpus tpu-vm ssh ...
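For example, a sketch of that command using the node name and zone from the question (this assumes node-1 was created as a TPU VM; a TPU node created through the legacy architecture still cannot be reached this way):
gcloud alpha compute tpus tpu-vm ssh node-1 --zone=asia-east1-c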
I have a service running in OpenShift on 2 pods, let's call them P1 and P2.
The service does two things:
It serves an API.
It listens to Kafka messages from a topic and then processes them.
Is there a way I can restrict all API calls to P1 only, and all Kafka processing to P2 only?
My suggestion may not fit your requirements, but if each pod is running in a separate project, then it is possible as follows.
First, you should configure each pod's source IP statically using an egress IP at the project level; refer to Enabling Static IPs for External Project Traffic for more details.
$ oc patch netnamespace p1_project -p '{"egressIPs": ["1.1.1.1"]}'
$ oc patch netnamespace p2_project -p '{"egressIPs": ["2.2.2.2"]}'
After that, you can allow each pod's IP in a route-specific whitelist; refer to Route-specific IP Whitelists for more details.
kind: Route
metadata:
  name: R1
  annotations:
    haproxy.router.openshift.io/ip_whitelist: 1.1.1.1

kind: Route
metadata:
  name: R2
  annotations:
    haproxy.router.openshift.io/ip_whitelist: 2.2.2.2
I hope it helps you.
I'm trying to set up a pod which receives packets on port 1234 coming from external hosts. I confirmed via tcpdump that the packets are indeed arriving at the OpenShift cluster. Now, I already have pod AAAA running, which is supposed to get the packets for port 1234 (routed or forwarded from the OpenShift master). We have already assigned an IP for the pod, and the docs below have been followed thoroughly to set up the externalIP, ports, etc. I suspect the issue is with the master-config, but I can't paste it here.
My question is: what configuration needs to be put in place in the master-config in order to route port 1234 packets to pod AAAA?
I have already tried the OpenShift docs below:
https://docs.openshift.com/container-platform/3.3/admin_guide/tcp_ingress_external_ports.html
https://docs.openshift.com/container-platform/3.3/dev_guide/getting_traffic_into_cluster.html#using-ingress-IP-self-service
First of all, you are only referring to a Pod. I would recommend deploying your app as a Deployment instead. Please refer to this and this.
Additionally, in order to expose Deployments to the outside world in Kubernetes, you have to establish a Service. It can expose your app in a few different ways. Please read through this for the details.
If you are using any standard app, you can usually find an example deployment/service by googling the name of the app and 'kubernetes'.
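As a rough illustration only (the name, image, and label values here are hypothetical, not taken from your setup), a minimal Deployment listening on port 1234 could look like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp                 # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myregistry/myapp:latest   # hypothetical image
        ports:
        - containerPort: 1234
A Service selecting these pods (for example of type NodePort or LoadBalancer) would then expose port 1234 outside the cluster.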
In your master config (/etc/origin/master/master-config.yaml), just add:
servicesNodePortRange: "1234-1234"
kubernetesMasterConfig:
  apiServerArguments:
  controllerArguments:
  masterCount: 1
  masterIP: x.x.x.x
  podEvictionTimeout:
  proxyClientInfo:
    certFile: master.proxy-client.crt
    keyFile: master.proxy-client.key
  schedulerArguments:
  schedulerConfigFile: /etc/origin/master/scheduler.json
  servicesNodePortRange: "1234-1234"
  servicesSubnet: 172.30.0.0/16
  staticNodeNames: []
After that, restart the atomic-openshift-master service.
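On a single-master OpenShift 3.x host that restart is typically done with systemctl; on a multi-master setup the API and controllers services are restarted instead (this is an assumption about your environment, not something stated in the question):
# systemctl restart atomic-openshift-master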
Then, create a second service for your deployment with a load balancer type. Assuming your deployment config name is "myapp", create a new file similar to the one below:
--- "new-svc.yml" ----
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: myapp
    template: myapp-template
  name: myapp-ext
spec:
  ports:
  - name: myapp
    nodePort: 1234
    port: 1234
    protocol: TCP
    targetPort: 1234
  selector:
    name: myapp
  sessionAffinity: None
  type: LoadBalancer
After that, create the new service:
# oc create -f new-svc.yml
Finally, expose the new service "myapp-ext" by adding a route (1234 <-- 1234).
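One possible way to do that from the CLI, as a sketch (oc expose is the standard command; the --port value assumes the service port defined above):
# oc expose service myapp-ext --port=1234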
I have created a forwarding rule via HTTP load balancing in Google Cloud Platform. When trying to describe it, I get an error. How can I describe the forwarding rule on the command line? I'm sure that's the correct region and I've tried selecting all the region choices.
gcloud compute forwarding-rules list
NAME           REGION  IP_ADDRESS  IP_PROTOCOL  TARGET
my-forwarding          X.X.X.X     TCP          my-target-proxy
gcloud compute forwarding-rules describe my-forwarding
For the following forwarding rules:
- [my-forwarding]
choose a region:
[1] asia-east1
[2] europe-west1
[3] us-central1
[4] us-east1
Please enter your numeric choice: 3
ERROR: (gcloud.compute.forwarding-rules.describe) Could not fetch resource:
- The resource 'projects/my-project/regions/us-central1/forwardingRules/my-forwarding' was not found
As far as I know, forwarding rules may be defined both as regional resources (i.e. existing in a given region) and as global resources. It looks like your resource does not exist in the region us-central1. Could you please try to find it in the global namespace:
gcloud compute forwarding-rules describe my-forwarding --global
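If the name is still in doubt, you can also list only global forwarding rules first (the --global flag is supported by the list command as well):
gcloud compute forwarding-rules list --global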