I have an Angular application with PWA configured. Besides caching assets/images, I would also like to cache the images stored in Firebase Storage once they are loaded while I am online.
My application uses the Cloud Firestore database with data persistence enabled. When I need to load the authenticated user's avatar in offline mode, it tries to load it through the photoURL field, but since the app is offline the image cannot be fetched, so nothing is displayed, which is not a good experience for the user.
In my code I load the image as follows:
<img class="avatar mr-0 mr-sm-16" src="{{ (user$ | async)?.photoURL || 'assets/images/avatars/profile.svg' }}">
I would like the app, when offline, to look somewhere in the cache for the image that was previously loaded.
It would be very annoying to have to call some method to cache the image every time it is loaded; I know something like that is possible, but I do not know how to do it.
Is it possible to do this through the ngsw-config.json configuration file?
ngsw-config.json:
{
"index": "/index.html",
"assetGroups": [
{
"name": "app",
"installMode": "prefetch",
"resources": {
"files": [
"/favicon.ico",
"/index.html",
"/*.css",
"/*.js"
],
"urls": [
"https://fonts.googleapis.com/css?family=Muli:300,400,600,700"
]
}
}, {
"name": "assets",
"installMode": "lazy",
"updateMode": "prefetch",
"resources": {
"files": [
"/assets/**",
"/*.(eot|svg|cur|jpg|png|webp|gif|otf|ttf|woff|woff2|ani)"
]
}
}
]
}
Yes, it's possible. I tried it and it works for me. I have a PWA built with Ionic and Angular 7, and in my 'ngsw-config.json' I used this config:
{
"index": "/index.html",
"assetGroups": [{
"name": "app",
"installMode": "prefetch",
"resources": {
"files": [
"/favicon.ico",
"/index.html",
"/*.css",
"/*.js"
]
}
}, {
"name": "assets",
"installMode": "lazy",
"updateMode": "prefetch",
"resources": {
"files": [
"/assets/**",
"/*.(eot|svg|cur|jpg|png|webp|gif|otf|ttf|woff|woff2|ani)"
]
}
}],
"dataGroups": [{
"name": "api-freshness",
"urls": [
"https://firebasestorage.googleapis.com/v0/b/mysuperrpwapp.appspot.com/"
],
"cacheConfig": {
"maxSize": 100,
"maxAge": "180d",
"timeout": "10s",
"strategy": "freshness"
}
}]
}
This article explains well how it works and which strategies you can use:
https://medium.com/progressive-web-apps/a-new-angular-service-worker-creating-automatic-progressive-web-apps-part-1-theory-37d7d7647cc7
During testing it was very important to have a valid HTTPS connection so that the service worker starts. Once you go offline, you can see that the file is served by the service worker.
(Screenshot: test image served from the service worker)
Just do:
storage.ref("pics/yourimage.jpg").updateMetadata({ 'cacheControl': 'private, max-age=15552000' }).subscribe(e => { });
and in your ngsw-config.json
"assetGroups": [{
"name": "app",
"installMode": "prefetch",
"resources": {
"files": [
"/favicon.ico",
"/index.html",
"/*.css",
"/*.js"
],
"url":[
"https://firebasestorage.googleapis.com/v0/b/*"
]
}
}
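For reference, here is a minimal sketch (not from the original answer) of how that metadata call might be wrapped in an Angular service using AngularFireStorage; the service name and storage path are placeholders, and the import path depends on your AngularFire version:
import { Injectable } from '@angular/core';
import { AngularFireStorage } from '@angular/fire/storage';

@Injectable({ providedIn: 'root' })
export class AvatarCacheService {
  constructor(private storage: AngularFireStorage) {}

  // Set a long Cache-Control header on an uploaded image so the browser
  // and the Angular service worker are allowed to keep it for offline use.
  markCacheable(path: string): void {
    this.storage
      .ref(path) // e.g. 'pics/yourimage.jpg'
      .updateMetadata({ cacheControl: 'private, max-age=15552000' })
      .subscribe();
  }
}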
I have a CloudFormation template. We have multiple environments (dev, qa, uat) and need to use the same template for all of them.
In the template, under "Action": ["iam:PassRole"], there are 4 resources, 3 of which belong to qa. When I deploy to the dev and uat environments, the qa resources are applied there as well, but I need those 3 qa resources to be created only in the qa environment. I tried some conditions, but they don't work. Is there an approach for this?
Please find the template code below.
{
"AWSTemplateFormatVersion": "2010-09-09",
"Resources": {
"CloudFormationRole": {
"Type": "AWS::IAM::Role",
"Description": "Service role in IAM for AWS CloudFormation",
"Properties": {
"RoleName": {
"Fn::Sub": "${Environment}-workflow-CloudFormationRole"
},
"Path": "/",
"Policies": [
{
"PolicyName": "WorkerCloudFormationRolePolicy",
"PolicyDocument": {
"Statement": [
{
"Action": [
"lambda:AddPermission",
"lambda:PutFunctionEventInvokeConfig",
"lambda:UpdateFunctionEventInvokeConfig"
],
"Resource": {
"Fn::Sub": "arn:aws:lambda:function:orderser-${Environment}-workflow-*"
},
"Effect": "Allow"
},
{
"Action": [
"iam:PassRole"
],
"Resource": [
{"Fn::Sub": "arn:aws:iam::role/orderser-workflow-*"},
{"Fn::Sub": "arn:aws:iam::role/orderserv-qa-workflowLambdaRole1"},
{"Fn::Sub": "arn:aws:iam::role/orderserv-qa-workflowLambdaRole2"},
{"Fn::Sub": "arn:aws:iam::role/orderserv-qa-workflowLmbdRole3"}
],
"Effect": "Allow"
}
]
}
}
]
}
}
}
}
Parameterize the PassRole ARNs and have a different set of parameter files for each environment.
Since you have multiple values, use a CommaDelimitedList-type parameter, which can take multiple string values.
Ref here: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/parameters-section-structure.html
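As a rough sketch (the parameter name and file name below are made up, not from the original template), the PassRole ARNs could come from a list parameter:

"Parameters": {
  "PassRoleArns": {
    "Type": "CommaDelimitedList",
    "Description": "Role ARNs that iam:PassRole should allow in this environment"
  }
}

The PassRole statement then references the whole list:

{
  "Action": ["iam:PassRole"],
  "Resource": { "Ref": "PassRoleArns" },
  "Effect": "Allow"
}

and each environment gets its own parameter file, e.g. a hypothetical qa-params.json listing only the qa roles:

[
  {
    "ParameterKey": "PassRoleArns",
    "ParameterValue": "arn:aws:iam::role/orderserv-qa-workflowLambdaRole1,arn:aws:iam::role/orderserv-qa-workflowLambdaRole2,arn:aws:iam::role/orderserv-qa-workflowLmbdRole3"
  }
]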
I was searching for a way to run an ECS task. I already have a cluster and task definition set up. I just want to trigger a task using a CloudFormation template. I know that I can run a task by clicking through the console and it works fine; for CloudFormation, the approach needs to be defined properly.
Check the attached screenshots. I want to run that task using CloudFormation and pass container-override environment variables. With my current templates, it does not let me do the same thing I can do using the console. In the console I just need to select the following options:
1. Launch type
2. Task Definition
Family
Revision
3. VPC and security groups
4. Environment variable overrides (the rest is selected automatically)
It works from the console, but how can we do that with a CloudFormation template? Is it possible, or is there no such feature?
"taskdefinition": {
"Type" : "AWS::ECS::TaskDefinition",
"DependsOn": "DatabaseMaster",
"Properties" : {
"ContainerDefinitions" : [{
"Environment" : [
{
"Name" : "TARGET_DATABASE",
"Value" : {"Ref":"DBName"}
},
{
"Name" : "TARGET_HOST",
"Value" : {"Fn::GetAtt": ["DatabaseMaster", "Endpoint.Address"]}
}
]
}],
"ExecutionRoleArn" : "arn:aws:iam::xxxxxxxxxx:role/ecsTaskExecutionRole",
"Family" : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
"TaskRoleArn" : "arn:aws:iam::xxxxxxxxxxxxxxx:role/xxxxxxxxxxxxxxx-XXXXXXXXX"
}
},
"EcsService": {
"Type" : "AWS::ECS::Service",
"Properties" : {
"Cluster" : "xxxxxxxxxxxxxxxxx",
"LaunchType" : "FARGATE",
"NetworkConfiguration" : {
"AwsvpcConfiguration" : {
"SecurityGroups" : ["sg-xxxxxxxxxxx"],
"Subnets" : ["subnet-xxxxxxxxxxxxxx"]
}
},
"TaskDefinition" : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
}
}
There is no validity error in the code; however, I am asking about the approach. I added the image name and container name, but now it asks for memory and CPU; it should not, since those are already defined in the task definition and we just need to run a task.
Edited
I want to run a task after my database is created and pass the database values to the task so it can run and complete a job.
Here is a working example of how you can pass variables and run a task. In my case I wanted to run a task after the creation of my database, with environment variables. AWS does not provide a direct CloudFormation feature for this; the solution below can trigger your ECS task.
"IAMRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"events.amazonaws.com"
]
},
"Action": [
"sts:AssumeRole"
]
}
]
},
"Description": "Allow CloudWatch Events to trigger ECS task",
"Policies": [
{
"PolicyName": "Allow-ECS-Access",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ecs:*",
"iam:PassRole",
"logs:CreateLogStream",
"logs:PutLogEvents"
],
"Resource": "*"
}
]
}
}
],
"RoleName": { "Fn::Join": [ "", ["CloudWatchTriggerECSRole-", { "Ref": "DBInstanceIdentifier" }]]}
}
},
"DummyParameter": {
"Type" : "AWS::SSM::Parameter",
"Properties" : {
"Name" : {"Fn::Sub": "${AWS::StackName}-${DatabaseMaster}-EndpointAddress"},
"Type" : "String",
"Value" : {"Fn::GetAtt": "DatabaseMaster.Endpoint.Address"}
},
"DependsOn": "TaskSchedule"
},
"TaskSchedule": {
"Type": "AWS::Events::Rule",
"Properties": {
"Description": "Trigger ECS task upon creation of DB instance",
"Name": { "Fn::Join": [ "", ["ECSTaskTrigger-", { "Ref": "DBName" }]]},
"RoleArn": {"Fn::GetAtt": "IAMRole.Arn"},
"EventPattern": {
"source": [ "aws.ssm" ],
"detail-type": ["Parameter Store Change"] ,
"resources": [{"Fn::Sub":"arn:aws:ssm:eu-west-1:XXXXXXX:parameter/${AWS::StackName}-${DatabaseMaster}-EndpointAddress"}],
"detail": {
"operation": ["Create"],
"name": [{"Fn::Sub": "${AWS::StackName}-${DatabaseMaster}-EndpointAddress"}],
"type": ["String"]
}
},
"State": "ENABLED",
"Targets": [
{
"Arn": "arn:aws:ecs:eu-west-1:xxxxxxxx:cluster/NameOf-demo",
"Id": "NameOf-demo",
"RoleArn": {"Fn::GetAtt": "IAMRole.Arn"},
"EcsParameters": {
"LaunchType": "FARGATE",
"NetworkConfiguration": {
"AwsVpcConfiguration": {
"SecurityGroups": {"Ref":"VPCSecurityGroups"},
"Subnets": {"Ref":"DBSubnetName"}
}
},
"PlatformVersion": "LATEST",
"TaskDefinitionArn": "arn:aws:ecs:eu-west-1:XXXXXXXX:task-definition/NameXXXXXXXXX:1"
},
"Input": {"Fn::Sub": [
"{\"containerOverrides\":[{\"name\":\"MyContainerName\",\"environment\":[{\"name\":\"VAR1\",\"value\":\"${TargetDatabase}\"},{\"name\":\"VAR2\",\"value\":\"${TargetHost}\"},{\"name\":\"VAR3\",\"value\":\"${TargetHostPassword}\"},{\"name\":\"VAR4\",\"value\":\"${TargetPort}\"},{\"name\":\"VAR5\",\"value\":\"${TargetUser}\"},{\"name\":\"VAR6\",\"value\":\"${TargetLocation}\"},{\"name\":\"VAR7\",\"value\":\"${TargetRegion}\"}]}]}",
{
"TargetDatabase": {"Ref":"DBName"},
"TargetHost": {"Fn::GetAtt": ["DatabaseMaster", "Endpoint.Address"]},
"TargetHostPassword": {"Ref":"DBPassword"},
"TargetPort": "5432",
"TargetUser": {"Ref":"DBUser"},
"TargetLocation": "value6",
"TargetRegion": "eu-west-2"
}
]}
}
]
}
}
For a Fargate task, we need to specify CPU in the task definition, and memory or memory reservation in either the task or the container definition.
Environment variables should be defined per container in ContainerDefinitions and then overridden when the task is run (from the console or the CLI).
{
"ContainerTaskdefinition": {
"Type": "AWS::ECS::TaskDefinition",
"Properties": {
"Family": "SomeFamily",
"ExecutionRoleArn": !Ref RoleArn,
"TaskRoleArn": !Ref TaskRoleArn,
"Cpu": "256",
"Memory": "1GB",
"NetworkMode": "awsvpc",
"RequiresCompatibilities": [
"EC2",
"FARGATE"
],
"ContainerDefinitions": [
{
"Name": "container name",
"Cpu": 256,
"Essential": "true",
"Image": !Ref EcsImage,
"Memory": "1024",
"LogConfiguration": {
"LogDriver": "awslogs",
"Options": {
"awslogs-group": null,
"awslogs-region": null,
"awslogs-stream-prefix": "ecs"
}
},
"Environment": [
{
"Name": "ENV_ONE_KEY",
"Value": "Valu1"
},
{
"Name": "ENV_TWO_KEY",
"Value": "Valu2"
}
]
}
]
}
}
}
EDIT (from discussion in comments):
Running an ECS task is not a CloudFormation resource; it can only be started from the console or the CLI.
If we do want to run it from CloudFormation, it can be done with a CloudFormation custom resource. But once the task ends, we are left with a resource in CloudFormation without an actual resource behind it. So the custom resource needs to:
on create: run the task.
on delete: do nothing.
on update: re-run the task.
Force an update by changing an attribute or the logical id every time you need to run the task.
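A minimal sketch of such a custom resource in the template (untested; 'RunTaskLambda' and 'DeploymentTimestamp' are hypothetical names, and the Lambda behind ServiceToken would have to call ecs:RunTask on create/update and simply signal success on delete):

"RunEcsTask": {
  "Type": "Custom::RunEcsTask",
  "Properties": {
    "ServiceToken": { "Fn::GetAtt": ["RunTaskLambda", "Arn"] },
    "Cluster": "arn:aws:ecs:eu-west-1:xxxxxxxx:cluster/NameOf-demo",
    "TaskDefinition": "arn:aws:ecs:eu-west-1:XXXXXXXX:task-definition/NameXXXXXXXXX:1",
    "ForceRun": { "Ref": "DeploymentTimestamp" }
  }
}

Changing the value passed to ForceRun (for example, a timestamp parameter) on each deployment is one way to get the "on update: re-run the task" behaviour described above.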
I tried to get the example from https://learn.microsoft.com/en-us/microsoftteams/platform/get-started/get-started-dotnet to run.
I successfully built and published the web app (on Azure).
Then I tried:
- manually editing the JSON file in the Visual Studio solution, similarly to what the docs describe, and then importing the resulting zip into App Studio
- adding the app directly in App Studio through the manifest editor (this works, but it fails when installing/testing it)
Both give me an error without any further info: "something went wrong".
Is there any way to actually figure out what I did wrong (if anything)?
Either way, maybe you can figure it out from the JSON file's contents:
{
"$schema": "https://statics.teams.microsoft.com/sdk/v1.0/manifest/MicrosoftTeams.schema.json",
"manifestVersion": "1.0",
"version": "1.0.0",
"id": " 1CC58D17-1E95-443C-958F-E1F14D4CA3B4",
"packageName": "com.contoso.helloworld",
"developer": {
"name": "Contoso",
"websiteUrl": "https://www.microsoft.com",
"privacyUrl": "https://www.microsoft.com/privacy",
"termsOfUseUrl": "https://www.microsoft.com/termsofuse"
},
"name": {
"short": "Hello World",
"full": "Hello World App for Microsoft Teams"
},
"description": {
"short": "Hello World App for Microsoft Teams",
"full": "This sample app provides a very simple app for Microsoft Teams. You can extend this to add more content and capabilities."
},
"icons": {
"outline": "contoso20x20.png",
"color": "contoso96x96.png"
},
"accentColor": "#60A18E",
"staticTabs": [
{
"entityId": "com.contoso.helloworld.hellotab",
"name": "Hello Tab",
"contentUrl": "https://microsoftteamssampleshelloworldweb20181022032046.azurewebsites.net/hello",
"scopes": [
"personal"
]
}
],
"configurableTabs": [
{
"configurationUrl": "https://microsoftteamssampleshelloworldweb20181022032046.azurewebsites.net/configure",
"canUpdateConfiguration": true,
"scopes": [
"team"
]
}
],
"bots": [
{
"botId": "00000000-0000-0000-0000-000000000000",
"needsChannelSelector": false,
"isNotificationOnly": false,
"scopes": [
"team",
"personal"
]
}
],
"composeExtensions": [
{
"botId": "00000000-0000-0000-0000-000000000000",
"scopes": [
"personal",
"team"
],
"commands": [
{
"id": "getRandomText",
"description": "Gets some random text and images that you can insert in messages for fun.",
"title": "Get some random text for fun",
"initialRun": true,
"parameters": [
{
"name": "cardTitle",
"description": "Card title to use",
"title": "Card title"
}
]
}
]
}
],
"permissions": [],
"validDomains": []
}
Any suggestions?
There is a whitespace in your app id in the manifest you shared:
"id": " 1CC58D17-1E95-443C-958F-E1F14D4CA3B4"
Could you please remove it, try again, and let us know if it works? You can also remove the bots and composeExtensions sections if you want.
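With the stray space removed, the line should read:
"id": "1CC58D17-1E95-443C-958F-E1F14D4CA3B4",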
I can't comment yet (not enough reputation points), but could you go through the instructions again?
I believe something went wrong with either the GUID or one of the URLs. The instructions also advise using ngrok, which is useful for debugging.
If you can't find a clear error message, I advise you to follow those instructions.
When using the "Open Folder" functionality of Visual Studio, the IDE searches for project settings and configurations in a special json file. For CPP projects, this could be CppProperties.json. For CMake projects, this could be CMakeSettings.json.
This json file contains a collection of one or more "configurations," such as "Debug" or "Release". I will use a recent CMake project as an example:
"configurations": [
{
"name": "ARM-Debug",
"generator": "Ninja",
"configurationType": "Debug",
"inheritEnvironments": [
"gcc-arm"
],
"buildRoot": "${env.USERPROFILE}\\CMakeBuilds\\${workspaceHash}\\build\\${name}",
"installRoot": "${env.USERPROFILE}\\CMakeBuilds\\${workspaceHash}\\install\\${name}",
"cmakeCommandArgs": "",
"buildCommandArgs": "-v",
"ctestCommandArgs": "",
"intelliSenseMode": "linux-gcc-arm",
"variables": [
{
"name": "CMAKE_TOOLCHAIN_FILE",
"value": "${workspaceRoot}/cmake/arm-none-eabi-toolchain.cmake"
}
]
},
{
"name": "ARM-Release",
"generator": "Ninja",
"configurationType": "Release",
"inheritEnvironments": [
"gcc-arm"
],
"buildRoot": "${env.USERPROFILE}\\CMakeBuilds\\${workspaceHash}\\build\\${name}",
"installRoot": "${env.USERPROFILE}\\CMakeBuilds\\${workspaceHash}\\install\\${name}",
"cmakeCommandArgs": "",
"buildCommandArgs": "-v",
"ctestCommandArgs": "",
"intelliSenseMode": "linux-gcc-arm",
"variables": [
{
"name": "CMAKE_TOOLCHAIN_FILE",
"value": "${workspaceRoot}/cmake/arm-none-eabi-toolchain.cmake"
}
]
}
As you can see, I have two configurations with nearly identical properties.
My question: is it possible to define these common/shared properties once, in such a way as to allow the configurations to inherit them and avoid repeating myself?
The easiest way is to define an environment at the global level (outside of any configuration), such as:
{
"environments": [
{
"namespace" : "env",
"varName": "varValue"
}
],
Then you can reuse that wherever you need to, e.g.:
"cmakeCommandArgs": "${env.varName}",
You can also have multiple environments, and reuse them, like this:
{
"environments": [
{
"environment": "env1",
"namespace": "env",
"varName": "varValueEnv1"
},
{
"environment": "env2",
"namespace": "env",
"varName": "varValueEnv2"
}
],
"configurations": [
{
"name": "x64-Release",
"inheritEnvironments": [
"msvc_x64_x64", "env2"
],
"cmakeCommandArgs": "${env.varName}",
.....
}
]
The 'x64-Release' configuration will inherit the variables' values from the environment named "env2" (namespace 'env').
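Applied to the ARM-Debug/ARM-Release example from the question, a sketch could look like the following (untested; 'toolchainFile' is a made-up variable name, and it assumes macros such as ${workspaceRoot} expand inside environment values):

{
  "environments": [
    {
      "namespace": "env",
      "toolchainFile": "${workspaceRoot}/cmake/arm-none-eabi-toolchain.cmake"
    }
  ],
  "configurations": [
    {
      "name": "ARM-Debug",
      "generator": "Ninja",
      "configurationType": "Debug",
      "inheritEnvironments": [ "gcc-arm" ],
      "buildRoot": "${env.USERPROFILE}\\CMakeBuilds\\${workspaceHash}\\build\\${name}",
      "installRoot": "${env.USERPROFILE}\\CMakeBuilds\\${workspaceHash}\\install\\${name}",
      "buildCommandArgs": "-v",
      "intelliSenseMode": "linux-gcc-arm",
      "variables": [
        {
          "name": "CMAKE_TOOLCHAIN_FILE",
          "value": "${env.toolchainFile}"
        }
      ]
    }
  ]
}

ARM-Release would reference ${env.toolchainFile} the same way, so the shared path only needs to change in one place.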
I created an Azure template for an alert, because I want to deploy the script (.json) together with the new microservice. But when I deploy this .json file it creates a new storage account, not an alert. I used the PowerShell command New-AzureRmResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName ExampleResourceGroup -TemplateFile c:\MyTemplates\storage.json -storageAccountType Standard_GRS. In my template I need to define the parameter kind, which only accepts a value of Storage or BlobStorage, but I want neither of these. So how can I create an alert using a .json template file, and does anybody have a template? MS isn't providing the correct one.
EDIT: Here is the .json file:
{
"$schema":
"http://schema.management.azure.com/schemas/2015-01-
01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"resources": [
{
"name": "[concat('storage', uniqueString(resourceGroup().id))]",
"type": "Microsoft.Storage/storageAccounts",
"apiVersion": "2016-01-01",
"sku": {
"name": "Standard_LRS"
},
"kind": "Storage",
"id":
"/subscriptions/subscriptionID/resourceGroups/resourceGroupName/providers/Microsoft.Storage/storageAccounts/storageName",
"location": "westeurope",
"properties": {
"name": "tryAgain",
"description": null,
"isEnabled": true,
"condition": {
"$type":
"Microsoft.WindowsAzure.Management.Monitoring.Alerts.Models.ThresholdRuleCondition, Microsoft.WindowsAzure.Management.Mon.Client",
"odata.type":
"Microsoft.Azure.Management.Insights.Models.ThresholdRuleCondition",
"dataSource": {
"$type":
"Microsoft.WindowsAzure.Management.Monitoring.Alerts.Models.RuleMetricDataSource, Microsoft.WindowsAzure.Management.Mon.Client",
"odata.type":
"Microsoft.Azure.Management.Insights.Models.RuleMetricDataSource",
"resourceUri":
"/subscriptions/subscriptionID/resourcegroups/resourceGroupName/providers/microsoft.web/sites/name",
"resourceLocation": null,
"metricNamespace": null,
"metricName": "AverageMemoryWorkingSet",
"legacyResourceId": null
},
"operator": "GreaterThanOrEqual",
"threshold": 120000000,
"windowSize": "PT10M",
"timeAggregation": "Average"
},
"actions": [
{
"$type":
"Microsoft.WindowsAzure.Management.Monitoring.Alerts.Models.RuleWebhookAction, Microsoft.WindowsAzure.Management.Mon.Client",
"odata.type":
"Microsoft.Azure.Management.Insights.Models.RuleWebhookAction",
"serviceUri":
"Logic-app URL",
"properties": {
"$type":
"Microsoft.WindowsAzure.Management.Common.Storage.CasePreservedDictionary`1[[System.String, mscorlib]], Microsoft.WindowsAzure.Management.Common.Storage",
"logicAppResourceId":
"/subscriptions/subscriptionID/resourceGroups/Default-Storage-WestEurope/providers/Microsoft.Logic/workflows/Microsoft-Teams-Notifier"
}
}
]
}
}
]
}
Please refer to the following reference for creating a metric alert via an Azure Resource Manager template. If you want a single ARM template that creates a storage account and then a metric alert to monitor it, make sure to add a dependsOn so the alert rule is only created after the storage account. The document below covers the newer metric alerts, as opposed to classic metric alerts:
https://learn.microsoft.com/en-us/azure/monitoring-and-diagnostics/monitoring-create-metric-alerts-with-templates
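For illustration, here is a rough, untested sketch of a standalone template for the newer (non-classic) metric alert matching the AverageMemoryWorkingSet condition from the question; the parameter names are placeholders, and the Logic App from the original rule would be invoked through an action group rather than a direct webhook:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "targetResourceId": { "type": "string" },
    "actionGroupId": { "type": "string" }
  },
  "resources": [
    {
      "type": "Microsoft.Insights/metricAlerts",
      "apiVersion": "2018-03-01",
      "name": "HighMemoryWorkingSet",
      "location": "global",
      "properties": {
        "description": "Average memory working set >= 120000000 over 10 minutes",
        "severity": 3,
        "enabled": true,
        "scopes": [ "[parameters('targetResourceId')]" ],
        "evaluationFrequency": "PT5M",
        "windowSize": "PT10M",
        "criteria": {
          "odata.type": "Microsoft.Azure.Monitor.SingleResourceMultipleMetricCriteria",
          "allOf": [
            {
              "name": "MemoryWorkingSet",
              "metricName": "AverageMemoryWorkingSet",
              "operator": "GreaterThanOrEqual",
              "threshold": 120000000,
              "timeAggregation": "Average"
            }
          ]
        },
        "actions": [
          { "actionGroupId": "[parameters('actionGroupId')]" }
        ]
      }
    }
  ]
}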