I'm doing this (with the aws-sdk Ruby gem):
s3_client.put_bucket_lifecycle_configuration({
  bucket: bucket,
  lifecycle_configuration: {
    rules: [
      {
        id: "clean-temporary",
        status: "Disabled", # required, accepts Enabled, Disabled
        prefix: "temporary",
        filter: {
          prefix: "temporary",
        },
        expiration: {
          days: 1,
        },
      },
    ],
  },
})
I got an error: Aws::S3::Errors::BadRequest
According to the release notes, my version (11) supports it:
S3 bucket lifecycle API has been added. Note that currently it only supports object expiration.
What am I doing wrong?
UPD:
Tried s3cmd; it didn't help:
⇒ s3cmd -c s3cfg setlifecycle lifecycle_configuration.xml s3://my-new-bucket
ERROR: S3 error: 405 (MethodNotAllowed)
⇒ cat lifecycle_configuration.xml
<?xml version="1.0" encoding="UTF-8"?>
<LifecycleConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Rule>
    <ID>test</ID>
    <Status>Enabled</Status>
    <Expiration>
      <Days>1</Days>
    </Expiration>
    <Prefix></Prefix>
  </Rule>
</LifecycleConfiguration>
Found a workaround:
s3cmd -c s3cfg setlifecycle lifecycle_configuration.xml s3://my-new-bucket --signature-v2
Probably the aws-sdk gem is not compatible with this feature.
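Before giving up on the gem entirely, it may be worth forcing the legacy signature there as well, mirroring the s3cmd workaround. A minimal sketch, assuming an aws-sdk v2 gem whose S3 client still accepts the signature_version option (the region and endpoint below are placeholders for your setup):
require 'aws-sdk'

# Hypothetical sketch: use legacy (pre-SigV4) signing, like s3cmd --signature-v2.
# Whether :signature_version is honoured depends on your aws-sdk gem version.
s3_client = Aws::S3::Client.new(
  region: 'us-east-1',                      # placeholder
  endpoint: 'http://my-s3-endpoint.local',  # placeholder for your S3-compatible endpoint
  signature_version: 's3'                   # legacy signing instead of the default 'v4'
)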
I'm trying to specify the API used by Ansible's dynamic inventory.
Does anyone have experience with solving this kind of incompatibility?
For a single host it might work as follows.
The Ansible JSON output will look like this:
{
  "ansible_host": "172.16.19.123",
  "proxy": "somehost.domain.fake"
}
The openapi.yml
paths:
  /api/inventory/host/:
    get:
      summary: Gets One Ansible Host
      parameters:
        - in: query
          name: hostname
          schema:
            type: string
          required: true
      responses:
        '200':
          description: OK
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/SingleHost'
components:
  schemas:
    SingleHost: # A Single Host for ansible-inventory --host [hostname] requests
      type: object
      properties:
        ansible_host:
          type: string
          description: IP-Address of the host
        ansible_port:
          type: integer
          description: ssh-port number of host
But when specifying the endpoint for ansible-inventory --list it already gets tricky (a rough sketch follows the example below).
Example of an ansible-inventory --list response:
{
  "HostGroupName-A": {
    "hosts": [
      "Host-A"
    ]
  },
  "HostGroupName-B": {
    "hosts": [
      "Host-A",
      "Host-B"
    ]
  }
}
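The part that is hard to express is that the group names are arbitrary object keys. A rough sketch of how this might be approximated in OpenAPI 3 with additionalProperties (the schema names here are my own, and the special _meta key is ignored):
components:
  schemas:
    InventoryList: # hypothetical schema for ansible-inventory --list responses
      type: object
      additionalProperties:       # arbitrary group names map to group objects
        $ref: '#/components/schemas/HostGroup'
    HostGroup:
      type: object
      properties:
        hosts:
          type: array
          items:
            type: string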
Should I just avoid using openapi to specify this?
I have a JSON file with key-value pairs and I want to access the values from Rundeck options dynamically during job execution.
For a shell script, we can use $RD_OPTIONS_<>.
Similarly, is there some format I can use in a JSON file?
Just use #option.myoption# in an inline-script step.
To manipulate JSON files on Rundeck you need a tool you can call from an inline-script step. I made an example using jq; alternatively, you can use bash script-fu to reach the same goal.
For example, using this JSON file:
{
"books": [{
"fear_of_the_dark": {
"author": "John Doe",
"genre": "Mistery"
}
}]
}
Update the file with the following jq call.
To test it directly in your terminal:
jq '.books[].fear_of_the_dark += { "ISBN" : "9999" }' myjson.json
On a Rundeck inline-script step:
echo "$(jq ''.books[].fear_of_the_dark += { "ISBN" : "#option.isbn#" }'' myjson.json)" > myjson.json
Check how it looks in an inline-script job (see here for how to import the job definition into your Rundeck instance).
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: d8f1c0e7-a7c6-43d4-91d9-25331cc06560
  loglevel: INFO
  name: JQTest
  nodeFilterEditable: false
  options:
  - label: isbn number
    name: isbn
    required: true
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - description: original file content
      exec: cat myjson.json
    - description: pass the option and save the content to the json file
      fileExtension: .sh
      interpreterArgsQuoted: false
      script: 'echo "$(jq ''.books[].fear_of_the_dark += { "ISBN" : "#option.isbn#"
        }'' myjson.json)" > myjson.json'
      scriptInterpreter: /bin/bash
    - description: modified file content (after jq)
      exec: cat myjson.json
    keepgoing: false
    strategy: node-first
  uuid: d8f1c0e7-a7c6-43d4-91d9-25331cc06560
Finally, check the result.
Here you can check more about executing scripts on Rundeck and here more about the JQ tool.
I have deployed an API Platform app that uses JWT tokens to Elastic Beanstalk; as usual, it works fine on my local server.
On EB, though, it denies access to logged-in users despite the correct Bearer token being provided.
This is the error thrown:
{
  "errors": [
    {
      "message": "Access Denied.",
      "extensions": {
        "category": "graphql"
      },
      "locations": [
        {
          "line": 6,
          "column": 9
        }
      ],
      "path": [
        "retrievedQueryUser"
      ]
    }
  ],
  "data": {
    "retrievedQueryUser": null
  }
}
The query in question attempts to retrieve user profile info through the GraphQL config below:
* "retrievedQuery"={
* "item_query"=UserProfileResolver::class,
* "normalization_context"={"groups"={"get-owner"}},
* "security"="is_granted('IS_AUTHENTICATED_FULLY') and object == user"
* },
So it should be a simple matter of checking whether the user is IS_AUTHENTICATED_FULLY and whether it is the user themselves trying to execute the query.
As far as I can tell from the dump below, placed in /vendor/symfony/security-core/Authorization/AuthorizationChecker.php, it's failing to retrieve a token.
var_dump($this->tokenStorage->getToken()->getUser()->getUsername());
I did a cursory comparison of phpinfo() between my local installation and the one at AWS-EB and could not find any obvious mismatch.
This is the config for JWT at /config/packages/lexik_jwt_authentication.yaml.
lexik_jwt_authentication:
    secret_key: '%env(resolve:JWT_SECRET_KEY)%'
    public_key: '%env(resolve:JWT_PUBLIC_KEY)%'
    pass_phrase: '%env(JWT_PASSPHRASE)%'
    user_identity_field: email
    token_ttl: 1800
Just to confirm: users are able to log in; it's when passing through the isGranted() check that it fails.
Any ideas?
EDIT - adding /config/packages/security.yaml:
security:
    # https://symfony.com/doc/current/security.html#where-do-users-come-from-user-providers
    encoders:
        App\Entity\User:
            algorithm: auto
            #algorithm: bcrypt
            #algorithm: argon2i
            cost: 12
    providers:
        database:
            entity:
                class: App\Entity\User
                property: email
    firewalls:
        dev:
            pattern: ^/(_(profiler|wdt)|css|images|js)/
            security: false
        refresh:
            pattern: ^/api/token/refresh
            stateless: true
            anonymous: true
        api:
            pattern: ^/api
            stateless: true
            anonymous: true
            json_login:
                check_path: /api/login_check
                success_handler: lexik_jwt_authentication.handler.authentication_success
                failure_handler: lexik_jwt_authentication.handler.authentication_failure
            guard:
                authenticators:
                    - app.google_login_authenticator
                    - App\Security\TokenAuthenticator
                entry_point: App\Security\TokenAuthenticator
            user_checker: App\Security\UserEnabledChecker
    access_control:
        - { path: ^/login, roles: IS_AUTHENTICATED_ANONYMOUSLY }
        - { path: ^/admin, roles: ROLE_SUPERADMIN }
        - { path: ^/api/token/refresh, roles: IS_AUTHENTICATED_ANONYMOUSLY }
        - { path: ^/api, roles: IS_AUTHENTICATED_ANONYMOUSLY }
    role_hierarchy:
        ROLE_PROVIDER: ROLE_USER
        ROLE_ADMIN: [ROLE_PROVIDER, ROLE_EDITOR]
        ROLE_SUPERADMIN: ROLE_ADMIN
Upon further research I found out that Apache was stripping the Authorization header from the request.
In the supports method of /lexik/jwt-authenticator-bundle/Security/Guard/JWTTokenAuthenticator, the dumps below do not include the token on AWS:
var_dump($request->headers->all());
var_dump($_SERVER);
As per this question, this is an Apache configuration issue: the Authorization header is not being passed through to PHP.
The indicated solution is to add the following to .htaccess:
SetEnvIf Authorization "(.*)" HTTP_AUTHORIZATION=$1
This resolves the issue, though one should note that the local Apache installation works fine without the above edit to .htaccess.
So it should also be possible to change the Apache configuration directly, but I could not find how to go about it.
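For reference, the usual vhost-level equivalent uses mod_rewrite; a sketch, assuming mod_rewrite is enabled and you can edit the vhost on the EB instance (for example via an .ebextensions config):
# Hypothetical vhost snippet: re-export the Authorization header so PHP sees it,
# the same effect as the SetEnvIf line in .htaccess.
RewriteEngine On
RewriteCond %{HTTP:Authorization} .+
RewriteRule ^ - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]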
EDIT: Later I found a specific instruction in the 'JWT-Token' docs that confirms this solution (see this link).
I have Behat 3.5 up and running fine on Windows 10. Now I wish to publish test results in HTML format, so I installed this plugin: https://github.com/dutchiexl/BehatHtmlFormatterPlugin
But how do I run Behat tests utilizing this plugin? If I type "behat" I only see the test steps in text format on the console. If I type "behat --format html --out test.feature.html --config behat.yml" I get HTML output that looks "ugly".
My composer.json:
{
"require": {
"behat/behat": "~3.0",
"behat/mink": "~1.7#dev",
"behat/mink-goutte-driver": "1.2.1",
"behat/mink-selenium2-driver": "~1.3.1" ,
"behat/mink-extension": "*"
},
"config": {
"bin-dir": "bin/"
},
"require-dev": {
"emuse/behat-html-formatter": "^0.2.0"
}
}
My behat.yml:
default:
    extensions:
        Behat\MinkExtension:
            default_session: goutte
            goutte: ~
            selenium2:
                wd_host: "http://127.0.0.1:4444/wd/hub"
                capabilities: { "browserName": "firefox", "browser": "firefox", "version": "", "platform": "WINDOWS" }
            browser_name: firefox
        emuse\BehatHTMLFormatter\BehatHTMLFormatterExtension:
            name: html
            renderer: Twig,Behat2
            file_name: index
            print_args: true
            print_outp: true
            loop_break: true
    suites:
        default:
            contexts:
                - emuse\BehatHTMLFormatter\Context\ScreenshotContext:
                    screenshotDir: build/html/behat/assets/screenshots
                - FeatureContext
    formatters:
        html:
            output_path: '%paths.base%/build/html/behat'
I found the details to make this work at https://packagist.org/packages/emuse/behat-html-formatter - I am using behat 3.6.1
After installing the HTML formatter with Composer (composer require --dev emuse/behat-html-formatter), I made my behat.yml file look like this:
default:
    suites:
        default:
            contexts:
                - FeatureContext
                - Drupal\DrupalExtension\Context\DrupalContext
                - Drupal\DrupalExtension\Context\MinkContext
                - Drupal\DrupalExtension\Context\MessageContext
                - Drupal\DrupalExtension\Context\DrushContext
                - emuse\BehatHTMLFormatter\Context\ScreenshotContext:
                    screenshotDir: report/html/behat/assets/screenshots
    formatters:
        html:
            output_path: report/html/behat
    extensions:
        Drupal\MinkExtension:
            goutte: ~
            selenium2: ~
            base_url: http://tea.ddev.site
        Drupal\DrupalExtension:
            blackbox: ~
        emuse\BehatHTMLFormatter\BehatHTMLFormatterExtension:
            name: html
            renderer: Twig,Behat2
            file_name: index
            print_args: true
            print_outp: true
            loop_break: true
Now when I run the Behat tests, the output goes to behat/report/html/behat/index.html.
I don't need to specify the HTML output format on the command line; it does that automatically.
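For completeness, the run is then just the plain command (assuming Composer's default vendor/bin location; adjust the path if you configured a custom bin-dir):
vendor/bin/behat --config behat.yml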
I cloned the FIGWAY GitHub project in order to query the Orion Context Broker for entity attributes, but I'm getting an error in all the Python scripts:
File "GetEntity.py", line 37, in <module>
config = ConfigParser.RawConfigParser(allow_no_value=True)
TypeError: __init__() got an unexpected keyword argument 'allow_no_value'
I called it like this: python GetEntity.py Room
Some tips to investigate what is going on:
You should be using Python 2.7 to run these scripts (see the quick check after these tips). Can you please let me know which Python version and OS you are using?
We have updated FIGWAY last week. Can you please clone it again if you did it before?
You should be using the new scripts at folder: /python-IDAS4/ContextBroker
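On the first point: the allow_no_value argument was only added to ConfigParser in Python 2.7, which is exactly why older interpreters raise that TypeError. A quick check you can run (a hypothetical snippet, not part of FIGWAY):
import sys
import ConfigParser  # Python 2 module name, as used by the FIGWAY scripts

print(sys.version)
# allow_no_value exists only in ConfigParser from Python 2.7 onwards;
# on Python 2.6 the next line raises the same TypeError as GetEntity.py.
config = ConfigParser.RawConfigParser(allow_no_value=True)
print("ConfigParser supports allow_no_value")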
With the previous assumptions you should get something like this (as long as that entity does not exist on that ContextBroker at the time being):
i6#raspberrypi ~/github/fiware-figway/python-IDAS4/ContextBroker $ python GetEntity.py Room
* Asking to http://130.206.80.40:1026/ngsi10/queryContext
* Headers: {'Fiware-Service': 'OpenIoT', 'content-type': 'application/json', 'accept': 'application/json', 'X-Auth-Token': 'NULL'}
* Sending PAYLOAD:
{
    "entities": [
        {
            "type": "",
            "id": "Room",
            "isPattern": "false"
        }
    ],
    "attributes": []
}
...
* Status Code: 200
* Response:
{
    "errorCode" : {
        "code" : "404",
        "reasonPhrase" : "No context element found"
    }
}