How can I allow an AWS IAM permission only at a specific time? (JSON)

I am trying to make an IAM policy that only works from 9 AM to 10 AM.
I am not a coder, so I am struggling with this...
Help me, please.
I tried to add conditions:
"Condition": {
    "DateGreaterThan": {"aws:CurrentTime": "2021-04-20T09:00:00Z"},
    "DateLessThan": {"aws:CurrentTime": "2021-04-20T10:00:00Z"}
}
but this way I have to update the policy manually every day.
So I tried:
"Condition": {
    "DateGreaterThan": {"aws:CurrentTime": "****-**-**T09:00:00Z"},
    "DateLessThan": {"aws:CurrentTime": "****-**-**T10:00:00Z"}
}
but this does not work.
How can I make the permission depend on the time of day rather than on the date?
Thank you

You can't use wildcards in date condition operators in IAM. From the docs:
Wildcards are not permitted for date condition operators.
So you have to use either a full date/time or a Unix timestamp. To automate this, you can create a Lambda function that is triggered on a schedule; the Lambda would use the AWS SDK to update your policy every day.
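A minimal sketch of such a Lambda, assuming a daily EventBridge schedule and a hypothetical managed-policy ARN (in a real deployment you would also have to prune old policy versions, since a managed policy keeps at most five):

```python
import json
from datetime import date

def build_time_window_policy(day: date) -> dict:
    """Build a policy document whose Condition covers 09:00-10:00 UTC on `day`."""
    start = f"{day.isoformat()}T09:00:00Z"
    end = f"{day.isoformat()}T10:00:00Z"
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "*",      # narrow this to the actions you actually need
            "Resource": "*",
            "Condition": {
                "DateGreaterThan": {"aws:CurrentTime": start},
                "DateLessThan": {"aws:CurrentTime": end},
            },
        }],
    }

def lambda_handler(event, context):
    # Runs once a day via an EventBridge schedule rule.
    import boto3  # available in the Lambda runtime
    iam = boto3.client("iam")
    policy_arn = "arn:aws:iam::123456789012:policy/time-window-policy"  # hypothetical ARN
    doc = build_time_window_policy(date.today())
    iam.create_policy_version(
        PolicyArn=policy_arn,
        PolicyDocument=json.dumps(doc),
        SetAsDefault=True,
    )
```

The Lambda's execution role needs iam:CreatePolicyVersion (and iam:DeletePolicyVersion if you prune old versions) on that policy.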

Related

Activate the zabbix trigger at a certain time

I want to make a Zabbix trigger work strictly at a certain time.
I read in the zabbix documentation that the time() function can help with this.
{test.domain:web.test.fail[trigger].last()}=0 and time()>000000 and time()<060000
But when I save the trigger, I get an error:
Invalid parameter "/1/expression": incorrect trigger expression starting from " time()>000000 and time()<060000".
Zabbix 5.0.18
I will be grateful for your help.
Zabbix triggers are evaluated each time a new value is received.
You can check an item at certain times using Custom Intervals.
See: https://www.zabbix.com/documentation/current/en/manual/config/items/item/custom_intervals
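As a sketch of the idea (the delay value is just an example; verify the exact syntax against the Zabbix documentation linked above): set the item's default update interval to 0 and add a flexible interval so the item is only polled between 00:00 and 06:00, which means the trigger only evaluates fresh values inside that window:

```
Update interval:             0
Custom intervals (Flexible): 60s/1-7,00:00-06:00
```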

Query Google Admin User directory comparing parameters

I'm trying to filter my users list by comparing two parameters
query="EmployeeData.EmployeeID=externalId"
EmployeeData.EmployeeID is a custom schema field that is populated, by a cron job, with the same value as externalId.
Of course I let the cron copy the field only when necessary, which is why I'm trying to filter the users list.
The way I wrote it, the query seems to look for the literal value "externalId" inside EmployeeData.EmployeeID, ignoring that externalId is itself a field.
Any suggestion?
The way your code is written, the query sent to Google's servers is, as you correctly guessed, the following:
EmployeeData.EmployeeID=externalId, where your actual externalId is not sent; the literal string "externalId" is sent instead.
To replace this string with the actual value of your variable, you can use string concatenation. To do it, you just need to modify your code as shown below:
query="EmployeeData.EmployeeID=" + externalId;
This way, the query will be sent to Google's servers as you need.
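The same fix in Python, for the Directory API via google-api-python-client (the ID value and the service object are hypothetical):

```python
external_id = "12345"  # hypothetical value, normally read from your data source

# Wrong: the literal text "externalId" is sent as the value to match.
bad_query = "EmployeeData.EmployeeID=externalId"

# Right: concatenate the variable's value into the query string.
query = "EmployeeData.EmployeeID=" + external_id

# With an authenticated Directory API client this would then be:
# service.users().list(customer="my_customer", query=query).execute()
```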

Apache Beam PubSubToBigQuery.java duplicate removal?

I am using the PubSubToBigQuery.java code without any changes. Would someone please show me how to remove duplicate records during this process?
I know the trick is to create a Window and use GroupByKey, but I really don't know how to write it.
Thanks
Assuming you just want to filter duplicates on successfully parsed events, you will need to add some code after this line:
transformOut
    .get(TRANSFORM_OUT)
    // Windowing must be applied before GroupByKey on an unbounded source.
    .apply("window", Window.<TableRow>configure().triggering(/* choose your trigger */))
    .apply("keyed", WithKeys.of(/* choose your key from the table row to identify duplicates */))
    .apply(GroupByKey.create())
    .apply("dedup", ParDo.of(new DoFn<KV<String, Iterable<TableRow>>, TableRow>() {
        @ProcessElement
        public void processElement(ProcessContext context) {
            // Output only one element per key to deduplicate.
            context.output(context.element().getValue().iterator().next());
        }
    }))
    .apply(
        "WriteSuccessfulRecords",
        BigQueryIO.writeTableRows()
            .withoutValidation()
            .withCreateDisposition(CreateDisposition.CREATE_NEVER)
            .withWriteDisposition(WriteDisposition.WRITE_APPEND)
            .to(options.getOutputTableSpec()));
BeamSQL actually tries to support your use case (check PubsubToBigqueryIT.java). BeamSQL allows you to create tables on a Pub/Sub topic and on a BigQuery table. Reading from Pub/Sub, converting the Pub/Sub messages, and writing to the BQ table are already handled by BeamSQL, and SQL can be applied to the data read from Pub/Sub. However, BeamSQL might be missing some features needed to finish your task (for example, the ANY_VALUE aggregation function, if you want to use GROUP BY to dedup in SQL).

zabbix Trigger with user macros

I need help with triggers and using user macros in them. I'm using Zabbix 3.4. I have a host and it has a macro called '{$CLASS_A}'.
I want to set up a trigger that fires when {$CLASS_A} = "HUGE" and free memory is less than 5G.
{my_test_server.vm.memory.size[available].last()}<5G
Can I not just do:
{$CLASS_A} = "HUGE" AND {my_test_server.vm.memory.size[available].last()}<5G
I cannot see what I should be doing to get this to work. Any help would be great.
The "and" operator is case-sensitive and should be lowercase.
The macro usage is incorrect as well: you can use a macro on the right portion of the expression (see here for more), like:
{ca_001:system.cpu.load[,avg1].last()}>{$MAX_CPULOAD}
You can modify your current trigger to:
{my_test_template:vm.memory.size[available].last()}<{$MAX_MEMORY}
then define {$MAX_MEMORY} both on the template and the host: the template macro value will act as the default and you can override it with a host macro.
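For example (the values are illustrative):

```
Template macro:  {$MAX_MEMORY} = 5G     # default for every linked host
Host macro:      {$MAX_MEMORY} = 16G    # override on one specific host
```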

zabbix regex to trigger for wrong data type

I have an item of type float, but sometimes, in case of error, a string is received instead of a number. How can I make a trigger regexp fire in this case?
I have no idea how to check for a "wrong data type".
Actually this is by design, and what I'm trying to do is this: if the data gathering fails, I send an error message in order to see it on the Zabbix end.
I tried nodata(0), but this doesn't seem to work.
In your case Zabbix will not store the "wrong" value for the item. And if you don't care what the string is, then you can just set up a trigger on nodata() for the period of your interval. Look in the triggers manual and search for "nodata".
Edit: scratch that, didn't read the whole question...
Edit 2: if you are certain that this is not working by design, and not because your trigger interval misses the data interval, then you can try to catch the unsupported status. There is an open feature request for that functionality, but you can set up a side script similar to this. Or you can wrap the monitored item on the node in a UserParameter script that reads the value and prints -1 or something if it is not a number. Then proceed with a normal numeric trigger.