Can't access any file in subdirectories with Firebase Hosting

I have a site hosted on Firebase Hosting and it works fine when I only load content from the root of the public folder, but I can't load anything from a subdirectory such as /img/logo.png.
I've searched here and around the internet but found no working solution. Some similar questions I tried:
Images not showing up in hosted site
How to include subdirectories in firebase hosting
My includes are like <script src="js/index.js"></script>
My firebase.json:
{
  "database": {
    "rules": "database.rules.json"
  },
  "hosting": {
    "public": "public",
    "ignore": [
      "firebase.json",
      "**/.*",
      "**/node_modules/**"
    ],
    "headers": [ {
      "source": "**/*.#(eot|otf|ttf|ttc|woff|font.css)",
      "headers": [ {
        "key": "Access-Control-Allow-Origin",
        "value": "*"
      } ]
    }, {
      "source": "**/*.#(jpg|jpeg|gif|png)",
      "headers": [ {
        "key": "Cache-Control",
        "value": "max-age=7200"
      } ]
    }, {
      "source": "404.html",
      "headers": [ {
        "key": "Cache-Control",
        "value": "max-age=300"
      } ]
    } ]
  }
}

I solved it by running firebase deploy inside Bash on Ubuntu on Windows (WSL).
Previously, I had been using the Firebase CLI on plain Windows 10.


How to run AWS ECS Task with CloudFormation overriding container environment variables

I am looking for a way to run an ECS task from a CloudFormation template, passing container-override environment variables. I already have a cluster and a task definition. I know I can run the task by clicking through the console and it works fine, but I have not found the right approach for CloudFormation. With my current templates it does not let me do the same thing I can do from the console. In the console I just need to select the following options:
1. Launch type
2. Task definition (family and revision)
3. VPC and security groups
4. Environment variable overrides (everything else is selected automatically)
It works from the console, but how can I do the same with a CloudFormation template? Is it possible, or is there no such feature?
"taskdefinition": {
"Type" : "AWS::ECS::TaskDefinition",
"DependsOn": "DatabaseMaster",
"Properties" : {
"ContainerDefinitions" : [{
"Environment" : [
{
"Name" : "TARGET_DATABASE",
"Value" : {"Ref":"DBName"}
},
{
"Name" : "TARGET_HOST",
"Value" : {"Fn::GetAtt": ["DatabaseMaster", "Endpoint.Address"]}
}
]
}],
"ExecutionRoleArn" : "arn:aws:iam::xxxxxxxxxx:role/ecsTaskExecutionRole",
"Family" : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
"TaskRoleArn" : "arn:aws:iam::xxxxxxxxxxxxxxx:role/xxxxxxxxxxxxxxx-XXXXXXXXX"
}
},
"EcsService": {
"Type" : "AWS::ECS::Service",
"Properties" : {
"Cluster" : "xxxxxxxxxxxxxxxxx",
"LaunchType" : "FARGATE",
"NetworkConfiguration" : {
"AwsvpcConfiguration" : {
"SecurityGroups" : ["sg-xxxxxxxxxxx"],
"Subnets" : ["subnet-xxxxxxxxxxxxxx"]
}
},
"TaskDefinition" : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
}
}
There is no validation error in the template; my question is about the approach. I added the image name and container name, but now it is asking for memory and CPU. It should not ask for those, since they are already defined in the task definition; I just need to run a task.
Edit
I want to run a task after my database is created and pass the database values to that task so it can complete a job.
Here is a working example of how to pass variables and run a task. In my case, I wanted to run the task after the creation of my database, with environment variables. AWS does not provide a direct feature for this in CloudFormation, but the following solution can be used to trigger your ECS task.
"IAMRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"events.amazonaws.com"
]
},
"Action": [
"sts:AssumeRole"
]
}
]
},
"Description": "Allow CloudWatch Events to trigger ECS task",
"Policies": [
{
"PolicyName": "Allow-ECS-Access",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ecs:*",
"iam:PassRole",
"logs:CreateLogStream",
"logs:PutLogEvents"
],
"Resource": "*"
}
]
}
}
],
"RoleName": { "Fn::Join": [ "", ["CloudWatchTriggerECSRole-", { "Ref": "DBInstanceIdentifier" }]]}
}
},
"DummyParameter": {
"Type" : "AWS::SSM::Parameter",
"Properties" : {
"Name" : {"Fn::Sub": "${AWS::StackName}-${DatabaseMaster}-EndpointAddress"},
"Type" : "String",
"Value" : {"Fn::GetAtt": "DatabaseMaster.Endpoint.Address"}
},
"DependsOn": "TaskSchedule"
},
"TaskSchedule": {
"Type": "AWS::Events::Rule",
"Properties": {
"Description": "Trigger ECS task upon creation of DB instance",
"Name": { "Fn::Join": [ "", ["ECSTaskTrigger-", { "Ref": "DBName" }]]},
"RoleArn": {"Fn::GetAtt": "IAMRole.Arn"},
"EventPattern": {
"source": [ "aws.ssm" ],
"detail-type": ["Parameter Store Change"] ,
"resources": [{"Fn::Sub":"arn:aws:ssm:eu-west-1:XXXXXXX:parameter/${AWS::StackName}-${DatabaseMaster}-EndpointAddress"}],
"detail": {
"operation": ["Create"],
"name": [{"Fn::Sub": "${AWS::StackName}-${DatabaseMaster}-EndpointAddress"}],
"type": ["String"]
}
},
"State": "ENABLED",
"Targets": [
{
"Arn": "arn:aws:ecs:eu-west-1:xxxxxxxx:cluster/NameOf-demo",
"Id": "NameOf-demo",
"RoleArn": {"Fn::GetAtt": "IAMRole.Arn"},
"EcsParameters": {
"LaunchType": "FARGATE",
"NetworkConfiguration": {
"AwsVpcConfiguration": {
"SecurityGroups": {"Ref":"VPCSecurityGroups"},
"Subnets": {"Ref":"DBSubnetName"}
}
},
"PlatformVersion": "LATEST",
"TaskDefinitionArn": "arn:aws:ecs:eu-west-1:XXXXXXXX:task-definition/NameXXXXXXXXX:1"
},
"Input": {"Fn::Sub": [
"{\"containerOverrides\":[{\"name\":\"MyContainerName\",\"environment\":[{\"name\":\"VAR1\",\"value\":\"${TargetDatabase}\"},{\"name\":\"VAR2\",\"value\":\"${TargetHost}\"},{\"name\":\"VAR3\",\"value\":\"${TargetHostPassword}\"},{\"name\":\"VAR4\",\"value\":\"${TargetPort}\"},{\"name\":\"VAR5\",\"value\":\"${TargetUser}\"},{\"name\":\"VAR6\",\"value\":\"${TargetLocation}\"},{\"name\":\"VAR7\",\"value\":\"${TargetRegion}\"}]}]}",
{
"VAR1": {"Ref":"DBName"},
"VAR2": {"Fn::GetAtt": ["DatabaseMaster", "Endpoint.Address"]},
"VAR3": {"Ref":"DBPassword"},
"VAR4": "5432",
"VAR5": {"Ref":"DBUser"},
"VAR6": "value6",
"VAR7": "eu-west-2"
}
]}
}
]
}
}
For a Fargate task, CPU must be specified in the task definition, and memory (or memory reservation) in either the task or the container definition.
Environment variables should be set per container in ContainerDefinitions and can be overridden when the task is run via ECS run-task from the console or the CLI.
{
  "ContainerTaskdefinition": {
    "Type": "AWS::ECS::TaskDefinition",
    "Properties": {
      "Family": "SomeFamily",
      "ExecutionRoleArn": { "Ref": "RoleArn" },
      "TaskRoleArn": { "Ref": "TaskRoleArn" },
      "Cpu": "256",
      "Memory": "1GB",
      "NetworkMode": "awsvpc",
      "RequiresCompatibilities": [
        "EC2",
        "FARGATE"
      ],
      "ContainerDefinitions": [
        {
          "Name": "container-name",
          "Cpu": 256,
          "Essential": true,
          "Image": { "Ref": "EcsImage" },
          "Memory": "1024",
          "LogConfiguration": {
            "LogDriver": "awslogs",
            "Options": {
              "awslogs-group": "xxxxxxxxxx",
              "awslogs-region": "eu-west-1",
              "awslogs-stream-prefix": "ecs"
            }
          },
          "Environment": [
            {
              "Name": "ENV_ONE_KEY",
              "Value": "Valu1"
            },
            {
              "Name": "ENV_TWO_KEY",
              "Value": "Valu2"
            }
          ]
        }
      ]
    }
  }
}
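For reference, when the task is run from the CLI, the container overrides take the same shape as the JSON embedded in the event rule's Input above. A minimal sketch of an overrides document that could be passed to ecs run-task with --overrides; the container name and the override values are placeholders matching the example above, not part of the original answer:
{
  "containerOverrides": [
    {
      "name": "container-name",
      "environment": [
        { "name": "ENV_ONE_KEY", "value": "OverriddenValue1" },
        { "name": "ENV_TWO_KEY", "value": "OverriddenValue2" }
      ]
    }
  ]
}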
EDIT (from discussion in the comments):
Running an ECS task is not a CloudFormation resource; it can only be triggered from the console or the CLI.
If you do want to trigger it from CloudFormation, you can use a CloudFormation custom resource. But once the task ends, you are left with a resource in CloudFormation that has no actual resource behind it, so the custom resource needs to behave as follows:
on create: run the task.
on delete: do nothing.
on update: re-run the task.
Force an update by changing an attribute or the logical ID every time you need to run the task.
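As a rough sketch of that pattern (not from the original discussion): the Custom::RunEcsTask type name, the RunEcsTaskFunction Lambda behind ServiceToken (which would call ecs:RunTask on Create and Update and simply signal success on Delete), and the RunToken parameter are all hypothetical names used only for illustration:
"RunEcsTask": {
  "Type": "Custom::RunEcsTask",
  "Properties": {
    "ServiceToken": { "Fn::GetAtt": ["RunEcsTaskFunction", "Arn"] },
    "TaskDefinition": { "Ref": "ContainerTaskdefinition" },
    "Cluster": "xxxxxxxxxxxxxxxxx",
    "RunToken": { "Ref": "RunToken" }
  }
}
Changing the RunToken parameter value (or any other property) on a stack update forces the custom resource to update, which re-runs the task, matching the on-update behaviour described above.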

VSCode Linux tasks.json MQL4 Compilation

I am switching to VSCode from MetaEditor to develop for MetaTrader4.
I'm using MetaTrader4 and MetaEditor in Linux via Wine.
(and MetaEditor runs quite terribly in Wine)
I would like to create a task to compile the code and, hopefully, get the same error log back in VSCode so I can debug the code as if I were using MetaEditor.
I used this post to figure out which CLI command is used to compile MQL4:
Compiling MQL4 via command line through wine metaeditor.exe
/usr/bin/wine /path/to/MT4/metaeditor.exe /compile:"Z:\path\to\MT4\MQL4\Experts\Foo\Bar_EA.mq4" /include:"Z:\path\to\MT4\MQL4" /log
My issue is that I can't find any resource that explains what the commands inside the tasks.json file do, or a list of the available options and variables, such as "/include:", "presentation", ${file}, etc.
So I took some guesses and pieced together something like this so far:
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "MQL4 Compile",
      "type": "shell",
      "command": "/usr/bin/wine /.wine/drive_c/Program Files (x86)/FXChoice MetaTrader 4/metaeditor.exe",
      "args": [
        "/compile:${file}"
      ]
    }
  ]
}
It's probably not quite right.
I'd appreciate your help, thank you.
{
    "version": "2.0.0",
    "tasks":
    [
        {
            "label": "MQL4-Compile",
            "group":
            {
                "kind": "build",
                "isDefault": true
            },
            "presentation":
            {
                "echo": true,
                "reveal": "always",
                "focus": true,
                "panel": "shared"
            },
            "promptOnClose": true,
            "type": "process",
            "osx":
            {
                "command": "wine",
                "args":
                [
                    "/Users/SVG/.wine/drive_c/Program Files/MetaTrader/metaeditor.exe",
                    "/compile:${fileBasename}",
                    "/log:${fileBasenameNoExtension}.log"
                ]
            },
            "windows":
            {
                "command": "C:\\Program Files (x86)\\MetaTrader\\metaeditor.exe",
                "args":
                [
                    "/compile:${fileBasename}",
                    "/log:${fileBasenameNoExtension}.log"
                ]
            }
        }
    ]
}
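Since the question is specifically about Linux, a "linux" section can sit alongside "osx" and "windows" inside the same task. A minimal sketch, assuming wine is on the PATH and reusing the FXChoice install path from the question; the /home/youruser prefix is a placeholder, so adjust the path to your own Wine prefix:
"linux":
{
    "command": "wine",
    "args":
    [
        "/home/youruser/.wine/drive_c/Program Files (x86)/FXChoice MetaTrader 4/metaeditor.exe",
        "/compile:${fileBasename}",
        "/log:${fileBasenameNoExtension}.log"
    ]
}
With "type": "process", the command is the executable and the first args entry is handed to wine as the program to run, mirroring the "osx" block above.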

How to add Leverage browser caching to firebase.json

I'm using Firebase on Google Cloud Platform for the first time and I've uploaded my static website but now I'd like to add:
"headers": [ {
"source" : "**/*.#(eot|otf|ttf|ttc|woff|font.css)",
"headers" : [ {
"key" : "Access-Control-Allow-Origin",
"value" : "*"
} ]
}, {
"source" : "**/*.#(jpg|jpeg|gif|png)",
"headers" : [ {
"key" : "Cache-Control",
"value" : "max-age=7200"
} ]
}, {
// Sets the cache header for 404 pages to cache for 5 minutes
"source" : "404.html",
"headers" : [ {
"key" : "Cache-Control",
"value" : "max-age=300"
} ]
} ]
to enable leverage browser caching, but I do not understand how to add these lines to the firebase.json file.
The firebase init command creates a firebase.json settings file in the root of your project's directory, but how can I change it after I've created the site?
Thanks a lot
If you want to change the caching settings on your web site, change the relevant Cache-Control header in your firebase.json and then rerun firebase deploy. This will deploy the latest firebase.json with your new settings and ensure that all HTML/CSS/JS/etc. files are up to date too.
If you've lost your firebase.json, a simple default from the Firebase Hosting reference documentation looks something like this:
{
  "hosting": {
    "public": "app",
    "ignore": [
      "firebase.json",
      "**/.*",
      "**/node_modules/**"
    ]
  }
}
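To answer the original question directly: the "headers" array belongs inside the "hosting" object, as a sibling of "public" and "ignore". A minimal sketch of the combined file, reusing the header rules from the question and the "app" public directory from the default above (swap in your own folder name and source patterns as needed):
{
  "hosting": {
    "public": "app",
    "ignore": [
      "firebase.json",
      "**/.*",
      "**/node_modules/**"
    ],
    "headers": [ {
      "source": "**/*.#(eot|otf|ttf|ttc|woff|font.css)",
      "headers": [ {
        "key": "Access-Control-Allow-Origin",
        "value": "*"
      } ]
    }, {
      "source": "**/*.#(jpg|jpeg|gif|png)",
      "headers": [ {
        "key": "Cache-Control",
        "value": "max-age=7200"
      } ]
    }, {
      "source": "404.html",
      "headers": [ {
        "key": "Cache-Control",
        "value": "max-age=300"
      } ]
    } ]
  }
}
After editing, run firebase deploy again so the new headers take effect.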

Polymer - How to put 2 URLs in registry -> search

This is the .bowerrc file. I want to search two URLs for the Polymer components. Is it possible to do that? Neither of the URLs is local.
{
  "registry": {
    "search": [
      "url1",
      "url2"
    ]
  },
  "strict-ssl": false,
  "resolvers": [
    "bower-art-resolver"
  ]
}

Fiware: No observation attributes in Orion CB when registered/sent via IDAS UltraLight

This question is very similar to Missing attributes on Orion CB Entity when registering device through IDAS, but I found no definitive answer there.
I have been trying to get UltraLight 2.0 via IDAS to the Orion Context Broker working in the FIWARE Lab environment:
using the latest GitHub scripts from
https://github.com/telefonicaid/fiware-figway/tree/master/python-IDAS4
following the tutorials, in particular
http://www.slideshare.net/FI-WARE/fiware-iotidasintroul20v2
I have a FIWARE Lab account with a token generated. I adapted the config.ini file:
[user]
# Please, configure here your username at FIWARE Cloud and a valid Oauth2.0 TOKEN for your user (you can use get_token.py to obtain a valid TOKEN).
username=MY_USERNAME
token=MY_TOKEN
[contextbroker]
host=130.206.80.40
port=1026
OAuth=no
# Here you need to specify the ContextBroker database you are querying.
# Leave it blank if you want the general database or the IDAS service if you are looking for IoT devices connected by you.
# fiware_service=
fiware_service=bus_auto
fiware-service-path=/
[idas]
host=130.206.80.40
adminport=5371
ul20port=5371
OAuth=no
# Here you need to configure the IDAS service your devices will be sending data to.
# By default the OpenIoT service is provided.
# fiware-service=fiwareiot
fiware-service=bus_auto
fiware-service-path=/
#apikey=4jggokgpepnvsb2uv4s40d59ov
apikey=4jggokgpepnvsb2uv4s40d59ov
[local]
#Choose here your System type. Examples: RaspberryPI, MACOSX, Linux, ...
host_type=MACOSX
# Here please add a unique identifier for you. Suggestion: the 3 lower hexa bytes of your Ethernet MAC. E.g. 79:ed:af
# Also you may use your e-mail address.
host_id=a0:11:00
I used the SENSOR_TEMP template, adding the 'protocol' field (PDI-IoTA-UltraLight), which was the first problem I stumbled upon:
{
  "devices": [
    {
      "device_id": "DEV_ID",
      "entity_name": "ENTITY_ID",
      "entity_type": "thing",
      "protocol": "PDI-IoTA-UltraLight",
      "timezone": "Europe/Amsterdam",
      "attributes": [
        {
          "object_id": "otemp",
          "name": "temperature",
          "type": "int"
        }
      ],
      "static_attributes": [
        {
          "name": "att_name",
          "type": "string",
          "value": "value"
        }
      ]
    }
  ]
}
Now I can register the device OK, like:
python RegisterDevice.py SENSOR_TEMP NexusPro Temp-Otterlo
and see it in the device list:
python ListDevices.py
I can send observations like:
python SendObservation.py Temp-Otterlo 'otemp|17'
But in the Context Broker I see the entity but never the measurements. For example,
python GetEntity.py Temp-Otterlo
gives:
* Asking to http://130.206.80.40:1026/ngsi10/queryContext
* Headers: {'Fiware-Service': 'bus_auto', 'content-type': 'application/json', 'accept': 'application/json', 'X-Auth-Token': 'NULL'}
* Sending PAYLOAD:
{
  "entities": [
    {
      "type": "",
      "id": "Temp-Otterlo",
      "isPattern": "false"
    }
  ],
  "attributes": []
}
...
* Status Code: 200
* Response:
{
  "contextResponses" : [
    {
      "contextElement" : {
        "type" : "thing",
        "isPattern" : "false",
        "id" : "Temp-Otterlo",
        "attributes" : [
          {
            "name" : "TimeInstant",
            "type" : "ISO8601",
            "value" : "2015-10-03T14:04:44.663133Z"
          },
          {
            "name" : "att_name",
            "type" : "string",
            "value" : "value",
            "metadatas" : [
              {
                "name" : "TimeInstant",
                "type" : "ISO8601",
                "value" : "2015-10-03T14:04:44.663500Z"
              }
            ]
          }
        ]
      },
      "statusCode" : {
        "code" : "200",
        "reasonPhrase" : "OK"
      }
    }
  ]
}
Strangely, I get a TimeInstant attribute. I tried playing with the .ini settings, like fiware-service=fiwareiot, but to no avail. I am out of ideas. The documentation in the catalogue for IDAS4 talks about sending observations to port 8002 and setting the "OpenIoT" service, but that failed as well.
Any help appreciated.
You should run python SendObservation.py NexusPro 'otemp|17' instead of python SendObservation.py Temp-Otterlo 'otemp|17'.
The reason is that you are sending an observation through the southbound interface, where the DEV_ID should be used rather than the entity name.
The entity does not include the attribute until an observation is received, so it is normal that you cannot see it yet. Once you try the command above, it should all work.
Cheers,