I have the following hosting targets in my firebase.json file:
{
"hosting": [
{
"target": "staging",
"public": "build",
"ignore": [
"firebase.json",
"**/.*",
"**/node_modules/**"
],
"rewrites": [
{
"source": "**",
"destination": "/index.html"
}
]
},
{
"target": "production",
"public": "build",
"ignore": [
"firebase.json",
"**/.*",
"**/node_modules/**"
],
"rewrites": [
{
"source": "**",
"destination": "/index.html"
}
]
}
]
}
And my .firebaserc file contains the following:
{
"projects": {
"default": "project-name177137"
},
"targets": {
"project-name177137": {
"hosting": {
"staging": [
"project-name177137"
],
"production": [
"productionName"
]
}
}
}
}
If I want to deploy to all targets I usually just run firebase deploy.
Now imagine I want to deploy only to staging for testing; what firebase command can I use to achieve that?
Thank you.
firebase deploy --only hosting:staging
See the Firebase CLI deploy documentation for details.
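As a quick sketch of how the pieces fit together (the target and project names below are the ones from the question's .firebaserc):

# Link each hosting target to its Firebase project (this is what populates .firebaserc)
firebase target:apply hosting staging project-name177137
firebase target:apply hosting production productionName

# Deploy only the staging target
firebase deploy --only hosting:staging

# Deploy several targets at once with a comma-separated list
firebase deploy --only hosting:staging,hosting:production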
I am trying to deploy a 3-tier architecture to Azure using the Azure PowerShell CLI and a customized ARM template with parameters. I am not having any issues with the PowerShell script or the template's validity.
Within the template, among other things, are two Virtual Machine Scale Sets, one for the front-end and one for the back-end. The front-end is Windows and the back-end is Red Hat. The front-end sits behind an application gateway while the back-end sits behind a load balancer. What's weird is that the front-end VMSS deploys with no problem and all is well, but the back-end VMSS fails every time I try to deploy it, with a vague "Unknown network allocation error" message that I have no idea how to debug (since, unlike all of my other error messages so far, it provides no specifics).
I based the ARM template on one exported from a working model of this architecture in another resource group, modified the parameters, and spent a while cleaning up issues and errors in Azure's exported template. I have tried deleting everything and starting from scratch, but that doesn't fix the problem. I thought it was possible I had reached the processor limit of my free subscription, so I made the front-end VMSS dependent on the back-end VMSS so the back-end VMSS would be created first, but the same issue still happened.
Here is the back-end VMSS part of the template:
{
"type": "Microsoft.Compute/virtualMachineScaleSets",
"apiVersion": "2018-10-01",
"name": "[parameters('virtualMachineScaleSets_JakeAppBESS_name')]",
"location": "westus2",
"dependsOn": [
"[parameters('loadBalancers_JakeAppBESSlb_name')]"
],
"sku": {
"name": "Standard_B1ls",
"tier": "Standard",
"capacity": 1
},
"properties": {
"singlePlacementGroup": true,
"upgradePolicy": {
"mode": "Manual"
},
"virtualMachineProfile": {
"osProfile": {
"computerNamePrefix": "jakeappbe",
"adminUsername": "Jake",
"adminPassword": "[parameters('JakeApp_Password')]",
"linuxConfiguration": {
"disablePasswordAuthentication": false,
"provisionVMAgent": true
},
"secrets": []
},
"storageProfile": {
"osDisk": {
"createOption": "FromImage",
"caching": "ReadWrite",
"managedDisk": {
"storageAccountType": "Premium_LRS"
}
},
"imageReference": {
"publisher": "RedHat",
"offer": "RHEL",
"sku": "7.4",
"version": "latest"
}
},
"networkProfile": {
"networkInterfaceConfigurations": [
{
"name": "[concat(parameters('virtualMachineScaleSets_JakeAppBESS_name'), 'Nic')]",
"properties": {
"primary": true,
"enableAcceleratedNetworking": false,
"dnsSettings": {
"dnsServers": []
},
"enableIPForwarding": false,
"ipConfigurations": [
{
"name": "[concat(parameters('virtualMachineScaleSets_JakeAppBESS_name'), 'IpConfig')]",
"properties": {
"subnet": {
"id": "[concat('/subscriptions/', parameters('subscription_id'), '/resourceGroups/', parameters('resource_Group'), '/providers/Microsoft.Network/virtualNetworks/', parameters('virtualNetworks_JakeAppVnet_name'), '/subnets/BEsubnet')]"
},
"privateIPAddressVersion": "IPv4",
"loadBalancerBackendAddressPools": [
{
"id": "[concat('/subscriptions/', parameters('subscription_id'), '/resourceGroups/', parameters('resource_Group'), '/providers/Microsoft.Network/loadBalancers/', parameters('loadBalancers_JakeAppBESSlb_name'), '/backendAddressPools/bepool')]"
}
],
"loadBalancerInboundNatPools": [
{
"id": "[concat('/subscriptions/', parameters('subscription_id'), '/resourceGroups/', parameters('resource_Group'), '/providers/Microsoft.Network/loadBalancers/', parameters('loadBalancers_JakeAppBESSlb_name'), '/inboundNatPools/natpool')]"
}
]
}
}
]
}
}
]
},
"priority": "Regular"
},
"overprovision": true
}
},
For reference, here's the front-end VMSS's part of the template so you can compare and see that there aren't many differences:
{
"type": "Microsoft.Compute/virtualMachineScaleSets",
"apiVersion": "2018-10-01",
"name": "[parameters('virtualMachineScaleSets_JakeAppFESS_name')]",
"location": "westus2",
"dependsOn": [
"[parameters('applicationGateways_JakeAppFE_AG_name')]",
],
"sku": {
"name": "Standard_B1ls",
"tier": "Standard",
"capacity": 1
},
"properties": {
"singlePlacementGroup": true,
"upgradePolicy": {
"mode": "Manual"
},
"virtualMachineProfile": {
"osProfile": {
"computerNamePrefix": "jakeappfe",
"adminUsername": "Jake",
"adminPassword": "[parameters('JakeApp_Password')]",
"windowsConfiguration": {
"provisionVMAgent": true,
"enableAutomaticUpdates": true
},
"secrets": []
},
"storageProfile": {
"osDisk": {
"createOption": "FromImage",
"caching": "ReadWrite",
"managedDisk": {
"storageAccountType": "Premium_LRS"
}
},
"imageReference": {
"publisher": "MicrosoftWindowsServer",
"offer": "WindowsServer",
"sku": "2016-Datacenter",
"version": "latest"
}
},
"networkProfile": {
"networkInterfaceConfigurations": [
{
"name": "[concat(parameters('virtualMachineScaleSets_JakeAppFESS_name'), 'Nic')]",
"properties": {
"primary": true,
"enableAcceleratedNetworking": false,
"dnsSettings": {
"dnsServers": []
},
"enableIPForwarding": false,
"ipConfigurations": [
{
"name": "[concat(parameters('virtualMachineScaleSets_JakeAppFESS_name'), 'IpConfig')]",
"properties": {
"subnet": {
"id": "[concat('/subscriptions/', parameters('subscription_id'), '/resourceGroups/', parameters('resource_Group'), '/providers/Microsoft.Network/virtualNetworks/', parameters('virtualNetworks_JakeAppVnet_name'), '/subnets/FEsubnet')]"
},
"privateIPAddressVersion": "IPv4",
"applicationGatewayBackendAddressPools": [
{
"id": "[concat('/subscriptions/', parameters('subscription_id'), '/resourceGroups/', parameters('resource_Group'), '/providers/Microsoft.Network/applicationGateways/', parameters('applicationGateways_JakeAppFE_AG_name'), '/backendAddressPools/appGatewayBackendPool')]"
}
]
}
}
]
}
}
]
},
"priority": "Regular"
},
"overprovision": true
}
},
I expected them both to behave similarly. Granted, the back-end is Red Hat Linux while the front-end is Windows, and the front-end is behind an application gateway while the back-end is behind a load balancer, but this setup works perfectly fine in my other resource group, which was deployed through the portal instead of through ARM. Yet every time I try to deploy this, I get this error:
New-AzureRmResourceGroupDeployment : 1:30:56 AM - Resource Microsoft.Compute/virtualMachineScaleSets 'ProdBESS' failed with message '{
"status": "Failed",
"error": {
"code": "ResourceDeploymentFailure",
"message": "The resource operation completed with terminal provisioning state 'Failed'.",
"details": [
{
"code": "NetworkingInternalOperationError",
"message": "Unknown network allocation error."
}
]
}
}'
Okay, I finally figured out what the issue was, so for anyone searching who finds this thread in the future with the same error:
The part of the template dealing with the load balancer for the VMSS (which was exported from the Azure portal) had two conflicting inbound NAT pools (overlapping port ranges). Once I deleted the part of the template creating the conflicting extra NAT pool, my VMSS deployed properly without issue.
I have no idea why the Azure portal exported a template with an extra NAT pool that never existed (there was only one on the original load balancer I exported the template from).
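If you want to confirm such an overlap before editing the template, one option is to list the NAT pools on the load balancer and compare their frontend port ranges. A minimal sketch using the cross-platform az CLI (an assumption, since the question drives everything through Azure PowerShell; the resource names are placeholders):

# Show every inbound NAT pool with its frontend port range; two pools whose
# ranges overlap will make VMSS network allocation fail
az network lb inbound-nat-pool list \
  --resource-group MyResourceGroup \
  --lb-name JakeAppBESSlb \
  --query "[].{name:name, start:frontendPortRangeStart, end:frontendPortRangeEnd}" \
  --output table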
I have different swagger files for multiple APIs, e.g. swagger1.json for the OpenStack API, swagger2.json for the Users API, and so on, and I am trying to merge all of these swagger files into one single file using the remote $ref method.
Here is the swagger1.json file for the OpenStack API:
{
"stacks": {
"get": {
"tags": [
"openstack"
],
"summary": "Returns stacks from OpenStack",
"description": "Returns all stacks from the OpenStack based on tenantId.",
"consumes": [
"application/json"
],
"produces": [
"application/json"
],
"parameters": [
{
"in": "query",
"type": "string",
"name": "tenantId",
"description": "search stacks for tenantId from OpenStack",
"required": false
}
],
"responses": {
"200": {
"description": "OK"
},
"400": {
"description": "Unknown Error"
},
"401": {
"description": "Unauthorized"
}
}
}
}
}
And this is swagger-merge.json, where I want to include the multiple swagger doc files using remote references:
{
"swagger": "2.0",
"info": {
"description": "something here",
"version": "v0.7.0",
"title": "The API Gateway",
"contact": {
"email": "dp#gmail.com"
}
},
"host": "localhost",
"port":"9191",
"basePath": "/api/openstack",
"tags": [
{
"name": "OpenStackApi",
"description": "Get stacks and running instance form OpenStack"
}
],
"schemes": [
"https"
],
"paths": {
"$ref": "./swagger1.json#/stacks"
}
}
This isn't working for me: I am not able to see the API methods I have written inside the swagger1.json file. I have uploaded the Swagger UI output. Any idea what I am doing wrong and how I can solve this issue?
You can't $ref the whole contents of paths; you can only $ref individual path items:
"paths": {
"/stacks": { <--- endpoint path
"$ref": "./swagger1.json#/stacks"
}
}
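Note that remote $refs like this are resolved by tooling, not automatically by every viewer. If you want to produce a single self-contained file, a bundler can inline them for you; a sketch, assuming the Node-based @apidevtools/swagger-cli package:

# Install the bundler (assumes Node.js and npm are available)
npm install -g @apidevtools/swagger-cli

# Validate the root file, then resolve all remote $refs into one document
swagger-cli validate swagger-merge.json
swagger-cli bundle swagger-merge.json --outfile combined.json --type json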
I want to automate the cluster creation task in EMR. I have a JSON file that contains the configuration which needs to be applied to the new cluster, and I want to write a shell script that automates this task for me.
Is it possible to create a cluster in EMR by supplying all of the configuration from the JSON file?
For example, I have this file:
{
"Cluster": {
"Ec2InstanceAttributes": {
"EmrManagedMasterSecurityGroup": "sg-00b10b71",
"RequestedEc2AvailabilityZones": [],
"AdditionalSlaveSecurityGroups": [],
"AdditionalMasterSecurityGroups": [],
"RequestedEc2SubnetIds": [
"subnet-02291b3e"
],
"Ec2SubnetId": "subnet-02291b3e",
"IamInstanceProfile": "EMR_EC2_DefaultRole",
"Ec2KeyName": "perf_key_pair",
"Ec2AvailabilityZone": "us-east-1e",
"EmrManagedSlaveSecurityGroup": "sg-f2b30983"
},
"Name": "NitinJ-Perf",
"ServiceRole": "EMR_DefaultRole",
"Tags": [
{
"Value": "Perf-Nitink",
"Key": "Qubole"
}
],
"Applications": [
{
"Version": "3.7.2",
"Name": "Ganglia"
},
{
"Version": "2.7.3",
"Name": "Hadoop"
},
{
"Version": "2.1.1",
"Name": "Hive"
},
{
"Version": "0.16.0",
"Name": "Pig"
},
{
"Version": "0.8.4",
"Name": "Tez"
}
],
"MasterPublicDnsName": "ec2-34-229-254-217.compute-1.amazonaws.com",
"ScaleDownBehavior": "TERMINATE_AT_INSTANCE_HOUR",
"InstanceGroups": [
{
"RequestedInstanceCount": 4,
"Status": {
"Timeline": {
"ReadyDateTime": 1499150835.979,
"CreationDateTime": 1499150533.99
},
"State": "RUNNING",
"StateChangeReason": {
"Message": ""
}
},
"Name": "Core Instance Group",
"InstanceGroupType": "CORE",
"EbsBlockDevices": [],
"ShrinkPolicy": {},
"Id": "ig-34P3CVF8ZL5CW",
"Configurations": [],
"InstanceType": "r3.4xlarge",
"Market": "ON_DEMAND",
"RunningInstanceCount": 4
},
{
"RequestedInstanceCount": 1,
"Status": {
"Timeline": {
"ReadyDateTime": 1499150804.591,
"CreationDateTime": 1499150533.99
},
"State": "RUNNING",
"StateChangeReason": {
"Message": ""
}
},
"Name": "Master Instance Group",
"InstanceGroupType": "MASTER",
"EbsBlockDevices": [],
"ShrinkPolicy": {},
"Id": "ig-3V7EHQ36187PY",
"Configurations": [],
"InstanceType": "r3.4xlarge",
"Market": "ON_DEMAND",
"RunningInstanceCount": 1
}
],
"Configurations": [
{
"Properties": {
"hive.vectorized.execution.enabled": "true"
},
"Classification": "hive-site"
}
]
}
}
Can I create a cluster on EMR using some command like
aws emr create-cluster --cli-input-json file://$(pwd)/emr_cluster_up.json
There is no such option in the AWS CLI, per the AWS CLI documentation. But if you want to automate EMR cluster creation from a JSON file, you can use CloudFormation to automate the cluster creation.
Getting Started with AWS CloudFormation
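A minimal sketch of that approach, assuming you first translate the describe-cluster output above into a CloudFormation template containing an AWS::EMR::Cluster resource (emr-cluster.json is a hypothetical file you would write; the describe-cluster JSON cannot be fed to CloudFormation as-is):

# Create a stack from the template; CAPABILITY_IAM is needed because the
# template references IAM roles such as EMR_DefaultRole
aws cloudformation create-stack \
  --stack-name emr-perf-cluster \
  --template-body file://emr-cluster.json \
  --capabilities CAPABILITY_IAM

# Block until the cluster (and the rest of the stack) is ready
aws cloudformation wait stack-create-complete --stack-name emr-perf-cluster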
I'm stuck with OpenShift (Origin) and need some help.
Let's say I want to add a Grafana deployment via the CLI to a newly started cluster.
What I do:
Upload a template to my OpenShift cluster (oc create -f openshift-grafana.yml)
Pull the necessary image from Docker Hub (oc import-image --confirm grafana/grafana)
Build a new app based on my template (oc new-app grafana)
These steps create the deployment config and the routes.
But then I'm not able to start a deployment via the CLI.
# oc deploy grafana
grafana deployment #1 waiting on image or update
# oc rollout latest grafana
Error from server (BadRequest): cannot trigger a deployment for "grafana" because it contains unresolved images
In the OpenShift web console it looks like this:
The image is there, and even the link works. In the web console I can click "deploy" and it works. Nevertheless, I'm not able to roll out a new version via the command line.
The only way it works is by editing the deployment YAML so that OpenShift recognizes a change and starts a deployment based on "config change" (note: I was not changing the image or the image name).
There is nothing special in my template; it was just an export via oc export from a working config.
Any hint would be appreciated; I'm pretty much stuck.
Thanks.
I had this same issue, and I solved it by adding:
lastTriggeredImage: >-
mydockerrepo.com/repo/myimage@sha256:xxxxxxxxxxxxxxxx
under:
triggers:
- type: ImageChange
imageChangeParams:
in the deployment config YAML. It looks like if it doesn't know what the last triggered image is, it won't be able to resolve it.
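An alternative to hand-editing the YAML is to re-point the image trigger from the CLI so OpenShift resolves the image itself; a sketch, assuming the image stream and container are both named grafana as in the question:

# Re-create the image change trigger against the image stream tag, then roll out
oc set triggers dc/grafana --from-image=grafana:latest --containers=grafana
oc rollout latest dc/grafana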
Included below is a template you can use as a starter. Just be aware that the Grafana image appears to require that it be run as root, else it will not start up. This means you have to override the default security model of OpenShift and allow images to run as root in the project. This is not recommended. The Grafana image should ideally be fixed so as not to require being run as root.
To enable running as root, you would need to run as a cluster admin:
oc adm policy add-scc-to-user anyuid -z default -n myproject
where myproject is the name of the project you are using.
I applied it to the default service account, but it would be better to create a separate service account, apply the policy to that, and then change the template so that only Grafana runs as that service account, as sketched below.
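Something along these lines, run as a cluster admin (the service account name grafana is my choice; you would also need to set serviceAccountName in the deployment config's pod spec):

# Create a dedicated service account and grant the anyuid SCC to it instead of "default"
oc create serviceaccount grafana -n myproject
oc adm policy add-scc-to-user anyuid -z grafana -n myproject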
It is possible that the intent is for you to override the default settings through the grafana.ini file so that it uses your mounted emptyDir directories, in which case this isn't an issue. I didn't attempt to provide any override config.
The template for Grafana would then be as follows. Note that I have used JSON because I find it easier to work with, and also to avoid indentation getting mangled and making the YAML impossible to use.
Before you use this template, you should obviously create the corresponding config map, whose name is of the form ${APPLICATION_NAME}-config, where ${APPLICATION_NAME} is grafana unless you override it when using the template. The key in the config map should be grafana.ini, with the config file contents as its value. For example:
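A minimal sketch, assuming your settings live in a local file called grafana.ini (the name grafana-config matches the default ${APPLICATION_NAME}-config):

# Create the config map that the template mounts at /etc/grafana
oc create configmap grafana-config --from-file=grafana.ini=./grafana.ini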
{
"apiVersion": "v1",
"kind": "Template",
"metadata": {
"name": "grafana"
},
"parameters": [
{
"name": "APPLICATION_NAME",
"value": "grafana",
"from": "[a-zA-Z0-9]",
"required": true
}
],
"objects": [
{
"apiVersion": "v1",
"kind": "ImageStream",
"metadata": {
"name": "${APPLICATION_NAME}-img",
"labels": {
"app": "${APPLICATION_NAME}"
}
},
"spec": {
"tags": [
{
"name": "latest",
"from": {
"kind": "DockerImage",
"name": "grafana/grafana"
}
}
]
}
},
{
"apiVersion": "v1",
"kind": "DeploymentConfig",
"metadata": {
"name": "${APPLICATION_NAME}",
"labels": {
"app": "${APPLICATION_NAME}",
"type": "monitoring"
}
},
"spec": {
"replicas": 1,
"selector": {
"app": "${APPLICATION_NAME}",
"deploymentconfig": "${APPLICATION_NAME}"
},
"template": {
"metadata": {
"labels": {
"app": "${APPLICATION_NAME}",
"deploymentconfig": "${APPLICATION_NAME}",
"type": "monitoring"
}
},
"spec": {
"containers": [
{
"name": "grafana",
"image": "${APPLICATION_NAME}-img:latest",
"imagePullPolicy": "Always",
"livenessProbe": {
"failureThreshold": 3,
"httpGet": {
"path": "/",
"port": 3000,
"scheme": "HTTP"
},
"periodSeconds": 10,
"successThreshold": 1,
"timeoutSeconds": 1
},
"ports": [
{
"containerPort": 3000,
"protocol": "TCP"
}
],
"volumeMounts": [
{
"mountPath": "/etc/grafana",
"name": "grafana-1"
},
{
"mountPath": "/var/lib/grafana",
"name": "grafana-2"
},
{
"mountPath": "/var/log/grafana",
"name": "grafana-3"
}
]
}
],
"volumes": [
{
"configMap": {
"defaultMode": 420,
"name": "${APPLICATION_NAME}-config"
},
"name": "grafana-1"
},
{
"emptyDir": {},
"name": "grafana-2"
},
{
"emptyDir": {},
"name": "grafana-3"
}
]
}
},
"test": false,
"triggers": [
{
"type": "ConfigChange"
},
{
"imageChangeParams": {
"automatic": true,
"containerNames": [
"grafana"
],
"from": {
"kind": "ImageStreamTag",
"name": "${APPLICATION_NAME}-img:latest"
}
},
"type": "ImageChange"
}
]
}
},
{
"apiVersion": "v1",
"kind": "Service",
"metadata": {
"name": "${APPLICATION_NAME}",
"labels": {
"app": "${APPLICATION_NAME}",
"type": "monitoring"
}
},
"spec": {
"ports": [
{
"name": "3000-tcp",
"port": 3000,
"protocol": "TCP",
"targetPort": 3000
}
],
"selector": {
"deploymentconfig": "${APPLICATION_NAME}"
},
"type": "ClusterIP"
}
},
{
"apiVersion": "v1",
"kind": "Route",
"metadata": {
"name": "${APPLICATION_NAME}",
"labels": {
"app": "${APPLICATION_NAME}",
"type": "monitoring"
}
},
"spec": {
"host": "",
"port": {
"targetPort": "3000-tcp"
},
"to": {
"kind": "Service",
"name": "${APPLICATION_NAME}",
"weight": 100
}
}
}
]
}
For me, the problem was that I had the image name wrong under from:
triggers:
- type: ConfigChange
- imageChangeParams:
automatic: true
containerNames:
- alcatraz-ha
from:
kind: ImageStreamTag
name: 'alcatraz-haproxy:latest'
namespace: alcatraz-ha-dev
type: ImageChange
I had name: 'alcatraz-ha:latest', so it could not find the image.
Make sure that spec.triggers.imageChangeParams.from.name exists as an image stream:
triggers:
- imageChangeParams:
from:
kind: ImageStreamTag
name: 'myapp:latest' # does "myapp" show up when you run oc get is?
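To verify, list the image streams in the project and inspect the one the trigger references (myapp stands in for whatever name your trigger uses):

# List image streams in the current project
oc get is

# Inspect the stream the trigger points at; the tag must exist and resolve
oc describe is/myapp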