I've been trying to set up a schedule for a pub/sub function that depends on the environment I deploy to. The process I'm following is simply to add the cron strings as environment variables.
.env.dev:
CRON_4H=every 4 hours
CRON_12H=every 12 hours
CRON_24H=every 24 hours
Then I use that parameter in the function definition:
exports.myfunction = functions.pubsub
  .schedule(process.env.CRON_12H)
  .onRun((context) => {
    // ETC
  });
I'm getting the following error:
Cannot create a scheduler job without a schedule
{
"id": "myfunction",
"project": "----",
"region": "----",
"entryPoint": "----",
"platform": "gcfv1",
"runtime": "nodejs16",
"scheduleTrigger": {
"schedule": "",
"timeZone": null
},
"labels": {
"deployment-tool": "cli-firebase"
},
"availableMemoryMb": 512,
"timeoutSeconds": 300,
"environmentVariables": {
"MIN_INSTANCES_1": "0",
"MIN_INSTANCES_3": "0",
"MIN_INSTANCES_5": "0",
"CRON_4H": "every 4 hours",
"CRON_12H": "every 12 hours",
"CRON_24H": "every 24 hours",
"FIREBASE_CONFIG": "--------",
"codebase": "default",
"securityLevel": "SECURE_ALWAYS"
}
}
It seems like the env vars are empty at deployment time; schedule and timeZone are indeed empty. Any advice on this?
EDIT:
It seems I'm only setting runtime env vars. How can I define build-time environment variables for a Firebase function?
I couldn't find a way to set "build" variables. The only value available at that time was the project ID from the Firebase config, which does seem to be readable during "build", so I changed the approach to the following.
In its own JS file to be imported:
let envProjectId = '';
if (process.env && process.env.FIREBASE_CONFIG) {
  envProjectId = JSON.parse(process.env.FIREBASE_CONFIG).projectId;
}

exports.instances =
  envProjectId !== 'prod-projectI-id'
    ? {
        MIN_INSTANCES_1: 0,
        MIN_INSTANCES_3: 0,
        MIN_INSTANCES_5: 0,
      }
    : {
        MIN_INSTANCES_1: 1,
        MIN_INSTANCES_3: 3,
        MIN_INSTANCES_5: 5,
      };
It's not as elegant as putting all those vars into a proper env file, but it worked for me.
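For reference, a minimal sketch of how the same project-ID check could supply the cron strings too, so that schedule() receives a real value at deploy time; the file name (schedule-config.js) and the per-environment intervals are assumptions, not my actual setup:

// schedule-config.js: evaluated at deploy ("build") time, when FIREBASE_CONFIG is readable
let envProjectId = '';
if (process.env && process.env.FIREBASE_CONFIG) {
  envProjectId = JSON.parse(process.env.FIREBASE_CONFIG).projectId;
}
const isProd = envProjectId === 'prod-projectI-id';

// Example intervals only; pick whatever each environment needs
exports.crons = {
  CRON_4H: isProd ? 'every 4 hours' : 'every 24 hours',
  CRON_12H: isProd ? 'every 12 hours' : 'every 24 hours',
  CRON_24H: 'every 24 hours',
};

And in the function definition:

const functions = require('firebase-functions');
const { crons } = require('./schedule-config');

exports.myfunction = functions.pubsub
  .schedule(crons.CRON_12H)
  .onRun((context) => {
    // ETC
  });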
I'm trying to run decentralized-model locally. I've managed to deploy:
Link contract
AggregatorProxy
FluxAggregator
Consumer contract
Oracle node (offchain)
External adapters (coingecko + coinapi)
I'm mainly struggling with the last piece, which is creating a Job that uses the FluxMonitor initiator.
I've created the following job, where "0x5379A65A620aEb405C5C5338bA1767AcB48d6750" is the address of the FluxAggregator contract:
{
"initiators": [
{
"type": "fluxmonitor",
"params": {
"address": "0x5379A65A620aEb405C5C5338bA1767AcB48d6750",
"requestData": {
"data": {
"from": "ETH",
"to": "USD"
}
},
"feeds": [
{
"bridge": "coinapi_cl_ea"
},
{
"bridge": "coingecko_cl_ea"
}
],
"threshold": 1,
"absoluteThreshold": 1,
"precision": 8,
"pollTimer": {
"period": "15m0s"
},
"idleTimer": {
"duration": "1h0m0s"
}
}
}
],
"tasks": [
{
"type": "NoOp"
}
]
}
Unfortunately, it doesn't work; it makes my local Ganache fail with this error: "Error: The nonce generation function failed, or the private key was invalid".
I've put my Ganache in debug mode in order to log requests to the blockchain, and noticed the following call:
eth_call
{
"jsonrpc": "2.0",
"id": 28,
"method": "eth_call",
"params": [
{
"data": "0xfeaf968c",
"from": "0x0000000000000000000000000000000000000000",
"to": "0x5379a65a620aeb405c5c5338ba1767acb48d6750"
},
"latest"
]
}
The function selector is correct:
"latestRoundData()": "feaf968c"
However, what seems weird is that the from address is "0x0". Any idea why my Oracle node doesn't use its key to sign the transaction?
Thanks a lot.
The problem comes from Ganache. In fact, I wrote a Truffle script which:
calls "latestRoundData()" populating the "FROM" with a valid address
calls "latestRoundData()" populating the "FROM" with a 0x0 address
Then I ran the script 2 times:
Connecting to Ganache-cli --> 1st call is successful while the 2nd call fails
Connecting to Kovan testnet --> both calls are successful
I've just opened an issue for ganache-cli team: https://github.com/trufflesuite/ganache-cli/issues/840
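For reference, a rough sketch of that script (run with truffle exec); the aggregator address is the one from the job above, and the valid "from" is simply the first unlocked account:

// scripts/latest-round-call.js: run with `truffle exec scripts/latest-round-call.js`
module.exports = async function (callback) {
  try {
    const aggregator = '0x5379A65A620aEb405C5C5338bA1767AcB48d6750'; // FluxAggregator
    const selector = '0xfeaf968c'; // latestRoundData()
    const accounts = await web3.eth.getAccounts();

    // 1st call: "from" is a valid unlocked account
    const ok = await web3.eth.call({ to: aggregator, from: accounts[0], data: selector });
    console.log('from valid address:', ok);

    // 2nd call: "from" is the zero address (this is the one that fails on ganache-cli)
    const zero = await web3.eth.call({
      to: aggregator,
      from: '0x0000000000000000000000000000000000000000',
      data: selector,
    });
    console.log('from zero address:', zero);
  } catch (err) {
    console.error(err);
  }
  callback();
};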
Is it possible to create a loop in an AWS Step Function and loop through a JSON input array?
I have a function generateEmails that creates an array with n objects:
{
"emails": [
{
"to": [
"willow1#aaa.co.uk"
]
},
{
"to": [
"willow2#aaa.co.uk"
]
}, {
"to": [
"willow3#aaa.co.uk"
]
}
]
}
Now I want to call the next function, sendEmail, for each object in the emails array, with input like this:
{
"email": {
"to": [
"willow#aaa.co.uk"
]
}
}
Step Function code:
{
"Comment": "A state machine that prepares and sends confirmation email ",
"StartAt": "generateEmails",
"States": {
"generateEmails": {
"Type": "Task",
"Resource": "arn:aws:lambda::prepare-confirmation-email",
"Next": "sendEmail"
},
"sendEmail": {
"Type": "Task",
"Resource": "arn:aws:lambda::function:template-service",
"End" : true
}
}
}
Is it possible to achieve this?
Thanks!
Yes, the Step Functions Map state makes this easy.
https://docs.aws.amazon.com/step-functions/latest/dg/amazon-states-language-map-state.html
Map allows you to run the same set of operations on each item in an array. If you set the MaxConcurrency field to something larger than 1, it will process the items in parallel; set it to 1 and it will iterate through them sequentially.
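As a rough sketch (not a drop-in definition: the Map state name, ItemsPath, and the truncated Lambda ARNs copied from your question are assumptions), the state machine could wrap sendEmail in an inline Map state like this:

{
  "Comment": "A state machine that prepares and sends confirmation emails",
  "StartAt": "generateEmails",
  "States": {
    "generateEmails": {
      "Type": "Task",
      "Resource": "arn:aws:lambda::prepare-confirmation-email",
      "Next": "sendEmails"
    },
    "sendEmails": {
      "Type": "Map",
      "ItemsPath": "$.emails",
      "MaxConcurrency": 1,
      "Parameters": {
        "email.$": "$$.Map.Item.Value"
      },
      "Iterator": {
        "StartAt": "sendEmail",
        "States": {
          "sendEmail": {
            "Type": "Task",
            "Resource": "arn:aws:lambda::function:template-service",
            "End": true
          }
        }
      },
      "End": true
    }
  }
}

Each iteration then receives { "email": { "to": [...] } } as its input, which matches the shape you described.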
For the scenario you described, the number of items will probably mean that "Inline" Map will work just fine. But if that list is larger and you want to fan out to higher concurrency, the recently launched Distributed Map feature will let you do so.
https://aws.amazon.com/blogs/aws/step-functions-distributed-map-a-serverless-solution-for-large-scale-parallel-data-processing/
Making an API GET call, I get the following JSON structure:
{
"metadata": {
"grand_total_entities": 231,
"total_entities": 0,
"count": 231
},
"entities": [
{
"allow_live_migrate": true,
"gpus_assigned": false,
"ha_priority": 0,
"memory_mb": 1024,
"name": "test-ansible2",
"num_cores_per_vcpu": 2,
"num_vcpus": 1,
"power_state": "off",
"timezone": "UTC",
"uuid": "e1aff9d4-c834-4515-8c08-235d1674a47b",
"vm_features": {
"AGENT_VM": false
},
"vm_logical_timestamp": 1
},
{
"allow_live_migrate": true,
"gpus_assigned": false,
"ha_priority": 0,
"memory_mb": 1024,
"name": "test-ansible1",
"num_cores_per_vcpu": 1,
"num_vcpus": 1,
"power_state": "off",
"timezone": "UTC",
"uuid": "4b3b315e-f313-43bb-941b-03c298937b4d",
"vm_features": {
"AGENT_VM": false
},
"vm_logical_timestamp": 1
},
{
"allow_live_migrate": true,
"gpus_assigned": false,
"ha_priority": 0,
"memory_mb": 4096,
"name": "test",
"num_cores_per_vcpu": 1,
"num_vcpus": 2,
"power_state": "off",
"timezone": "UTC",
"uuid": "fbe9a1ac-cf45-4efa-9d65-b3257548a9f4",
"vm_features": {
"AGENT_VM": false
},
"vm_logical_timestamp": 17
}
]
}
In my Ansible playbook I register a variable holding this content.
I need to get a list of the UUIDs of "test-ansible1" and "test-ansible2", but I'm having a hard time finding the best way to do this.
Note that I have another variable holding the list of names for which I need to look up the UUIDs.
The goal is to use those UUIDs to fire a power-on command for all the VMs corresponding to specific names.
How would you guys do that?
I've tried a number of approaches, but I can't seem to get what I want, so I'd prefer an uninfluenced opinion.
P.S.: This is what Nutanix AHV returns for a GET of all VMs through its API. There seems to be no way to get the JSON information for specific VMs only; you can only get all VMs.
Thanks.
Here is some Jinja2 magic for you:
- debug:
    msg: "{{ mynames | map('extract', dict(test_json | json_query('entities[].[name,uuid]'))) | list }}"
  vars:
    mynames:
      - test-ansible1
      - test-ansible2
Explanation:
test_json | json_query('entities[].[name,uuid]') reduces your original JSON data to a list of two-item lists containing the name and uuid values:
[
[
"test-ansible2",
"e1aff9d4-c834-4515-8c08-235d1674a47b"
],
[
"test-ansible1",
"4b3b315e-f313-43bb-941b-03c298937b4d"
],
[
"test",
"fbe9a1ac-cf45-4efa-9d65-b3257548a9f4"
]
]
BTW you can use http://jmespath.org/ to test query statements.
dict(...), when applied to such a structure (a list of "tuples"), generates a dictionary:
{
"test": "fbe9a1ac-cf45-4efa-9d65-b3257548a9f4",
"test-ansible1": "4b3b315e-f313-43bb-941b-03c298937b4d",
"test-ansible2": "e1aff9d4-c834-4515-8c08-235d1674a47b"
}
Then we apply the extract filter, as per the documentation, to fetch only the required elements:
[
"4b3b315e-f313-43bb-941b-03c298937b4d",
"e1aff9d4-c834-4515-8c08-235d1674a47b"
]
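If you want to keep that list around for the power-on calls mentioned in the question, you could store it with set_fact; the variable names here (test_json, mynames, vm_uuids) are assumptions:

- set_fact:
    vm_uuids: "{{ mynames | map('extract', dict(test_json | json_query('entities[].[name,uuid]'))) | list }}"
  vars:
    mynames:
      - test-ansible1
      - test-ansible2

vm_uuids can then be looped over (e.g. with loop/with_items) to fire the power-on API call once per UUID.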
In Azure Data Factory, I'm trying to call an Azure Machine Learning model from a Data Factory pipeline. I want to use an Azure SQL table as input and another Azure SQL table for the output.
First I deployed a Machine Learning (classic) web service. Then I created an Azure Data Factory pipeline, using a linked service (type 'AzureML', with the Request URI and API key of the ML web service) and an input and output dataset (of type 'AzureSqlTable').
Deploying and provisioning succeed. The pipeline starts as scheduled, but stays 'Running' without any result. The pipeline activity is not shown in the Monitor & Manage Activity Windows.
On various sites and tutorials, I only find JSON scripts using the activity type 'AzureMLBatchExecution' with blob inputs and outputs. I want to use Azure SQL inputs and outputs, but I can't get this working.
Can someone provide a sample JSON script or tell me what might be wrong with the code below?
Thanks!
{
"name": "Predictive_ML_Pipeline",
"properties": {
"description": "use MyAzureML model",
"activities": [
{
"type": "AzureMLBatchExecution",
"typeProperties": {},
"inputs": [
{
"name": "AzureSQLDataset_ML_Input"
}
],
"outputs": [
{
"name": "AzureSQLDataset_ML_Output"
}
],
"policy": {
"timeout": "02:00:00",
"concurrency": 3,
"executionPriorityOrder": "NewestFirst",
"retry": 1
},
"scheduler": {
"frequency": "Week",
"interval": 1
},
"name": "My_ML_Activity",
"description": "prediction analysis on ML batch input",
"linkedServiceName": "AzureMLLinkedService"
}
],
"start": "2017-04-04T09:00:00Z",
"end": "2017-04-04T18:00:00Z",
"isPaused": false,
"hubName": "myml_hub",
"pipelineMode": "Scheduled"
}
}
With a little help from a Microsoft technician, I've got this working. The JSON script above only needed changes in the schedule section:
"start": "2017-04-01T08:45:00Z",
"end": "2017-04-09T18:00:00Z",
A pipeline is active only between its start time and end time. Because the scheduler is set to weekly, the pipeline is triggered at the start of the week, and that date must fall between the start and end dates. For more details about scheduling, see: https://learn.microsoft.com/en-us/azure/data-factory/data-factory-scheduling-and-execution
The Azure SQL Input dataset should look like this:
{
"name": "AzureSQLDataset_ML_Input",
"properties": {
"published": false,
"type": "AzureSqlTable",
"linkedServiceName": "SRC_SQL_Azure",
"typeProperties": {
"tableName": "dbo.Azure_ML_Input"
},
"availability": {
"frequency": "Week",
"interval": 1
},
"external": true,
"policy": {
"externalData": {
"retryInterval": "00:01:00",
"retryTimeout": "00:10:00",
"maximumRetry": 3
}
}
}
}
I added the external and policy properties to this dataset (see the script above), and after that it worked.
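For completeness, a sketch of what the matching output dataset could look like; the table and linked service names are assumptions, and an output dataset doesn't need the external/policy properties:

{
  "name": "AzureSQLDataset_ML_Output",
  "properties": {
    "published": false,
    "type": "AzureSqlTable",
    "linkedServiceName": "DST_SQL_Azure",
    "typeProperties": {
      "tableName": "dbo.Azure_ML_Output"
    },
    "availability": {
      "frequency": "Week",
      "interval": 1
    }
  }
}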
I have just set up a Cachet status page, but I am struggling to push updates to the components via its API.
I am looking to take an existing JSON feed from a partner site and use this to update the status on my own page.
Here is a sample of the JSON data I need to pull:
{
"state":"online",
"message":"",
"description":"",
"append":"false",
"status":true,
"time":"Sat 23 Apr 2016 10:51:23 AM UTC +0000"
}
Below is the format Cachet uses in its API:
{
"data": {
"id": 1,
"name": "Component Name",
"description": "Description",
"link": "",
"status": 1,
"order": 0,
"group_id": 0,
"created_at": "2015-08-01 12:00:00",
"updated_at": "2015-08-01 12:00:00",
"deleted_at": null,
"status_name": "Operational"
}
}
I've never dealt with JSON before, but I guess I need a script that I can run every X minutes to grab the original data and do the following:
Convert the "state" from the original feed into the corresponding Cachet status.
Update the "updated_at" time with the time the script was last run.
Any help or tutorials would be really appreciated.
Thanks!
I'm the Lead Developer of Cachet, thanks for trying it out!
All you need to do is update the Component status. Cachet will take care of the updated_at timestamp for you.
I'm unable to write the script for you, but you'd do something like this:
// This will be a lookup of the states from the service you're watching to Cachet.
const serviceStates = {
  online: 1,
  issues: 2,
  offline: 4,
};

// The id of the component we're updating
const componentId = 1;

// The state that is coming back from the service you're watching
const state = 'online';

// Update the component via the Cachet API (assumes the "request" npm package and a valid API token).
request({
  url: 'https://demo.cachethq.io/api/v1/components/' + componentId,
  method: 'PUT',
  headers: { 'X-Cachet-Token': 'YOUR_API_TOKEN' },
  json: { status: serviceStates[state] },
});
Pseudo code, but you should be able to work from that.