I have a CloudFormation template that deploys a MySQL database and some other resources to AWS. I want the template to be general so that I can use it for different environments. For one of those resources (the master DB), I have a security group configuration that is environment-specific. I create a security group for each environment conditionally; they are called VaultSecurityGroupInEnv1, VaultSecurityGroupInEnv2, etc. There is a map that stores the security group name for each environment. Here are my configurations:
Mappings:
  RegionMap:
    environment1:
      VaultSG: VaultSecurityGroupInEnv1
    environment2:
      VaultSG: VaultSecurityGroupInEnv2
Resources:
  VaultSecurityGroupInEnv1:
    Type: AWS::EC2::SecurityGroup
    Condition: IsEnv1Environment
    # ... environment-specific properties ...
  VaultSecurityGroupInEnv2:
    Type: AWS::EC2::SecurityGroup
    Condition: IsEnv2Environment
    # ... environment-specific properties ...
  MasterDB:
    Type: AWS::RDS::DBInstance
    Properties:
      VPCSecurityGroups:
        - !ImportValue DbSgId
        - !Sub
          - '${vGroup}'
          - vGroup: !FindInMap
              - RegionMap
              - !Ref Environment
              - VaultSG
For this, I get the following error:
Invalid security group , groupId= vaultsecuritygroupinF.groupid, groupName=. (Service: AmazonRDS; Status Code: 400; Error Code: InvalidParameterValue;
The output from !Sub is resolved as a plain name string, not as a reference to the resource. Using !Ref vaultsecuritygroupinF.GroupId directly works fine. Any idea how to use the map and !Sub correctly?
Thanks
You can't use FindInMap the way you are trying: it will just resolve to the literal strings VaultSecurityGroupInEnv1 or VaultSecurityGroupInEnv2, not to the actual resources of the same names.
Instead, I think the following should be possible:
MasterDB:
  Type: AWS::RDS::DBInstance
  Properties:
    VPCSecurityGroups:
      - !ImportValue DbSgId
      - !If
        - IsEnv1Environment
        - !Ref VaultSecurityGroupInEnv1
        - !Ref "AWS::NoValue"
      - !If
        - IsEnv2Environment
        - !Ref VaultSecurityGroupInEnv2
        - !Ref "AWS::NoValue"
I need to make some changes to code that was written by another developer. One of them is to use the ST_DISTANCE_SPHERE function in a query. I added this function to the database following this link.
But I realized that it's not enough, because the application uses Doctrine. I don't use Doctrine in my usual applications, so I'm not quite sure what I should do.
So far I ran composer require creof/doctrine2-spatial in the console,
and I added the code below to config/packages/doctrine.yaml:
doctrine:
    dbal:
        url: '%env(resolve:DATABASE_URL)%'
        types:
            geometry: CrEOF\Spatial\DBAL\Types\GeometryType
            point: CrEOF\Spatial\DBAL\Types\Geometry\PointType
What more do I need to do to be able to use this function in my repository? The error I'm getting is:
Doctrine\ORM\Query\QueryException:
[Syntax Error] line 0, col 70: Error: Expected known function, got 'ST_DISTANCE_SPHERE'
at vendor\doctrine\orm\lib\Doctrine\ORM\Query\QueryException.php:54
Based on the error message, you may need something like:
doctrine:
    dbal:
        url: '%env(resolve:DATABASE_URL)%'
        types:
            geometry: CrEOF\Spatial\DBAL\Types\GeometryType
            point: CrEOF\Spatial\DBAL\Types\Geometry\PointType
    orm:
        dql:
            numeric_functions:
                # for postgresql
                stdistance: CrEOF\Spatial\ORM\Query\AST\Functions\PostgreSql\STDistance
Thanks to Nicodemuz's answer I finally found the right way:
orm:
    dql:
        numeric_functions:
            stdistance: CrEOF\Spatial\ORM\Query\AST\Functions\Mysql\STDistance
            stdistancesphere: CrEOF\Spatial\ORM\Query\AST\Functions\Mysql\STDistanceSphere
            distance: CrEOF\Spatial\ORM\Query\AST\Functions\MySql\Distance
            geometrytype: CrEOF\Spatial\ORM\Query\AST\Functions\MySql\GeometryType
            point: CrEOF\Spatial\ORM\Query\AST\Functions\MySql\Point
The second important thing was that I had to use the name STDistanceSphere in my query instead of ST_DISTANCE_SPHERE to make it work.
Third, it may be helpful for MySQL users like me to see this: it looks like this function is available for MySQL but is not merged, so you should add some files from here to the package.
I have a workflow_dispatch trigger with a logLevel input.
I need to define validation so that logLevel must be within a specific set of values: ['info', 'warning', 'error'].
I know I can use bash if commands and check the values, but I'd prefer not to do that.
Is there a built-in way of doing that?
Manually triggered workflows in GitHub Actions now support choice as an input type, so you can do something like this and the user will see it as a dropdown:
on:
  workflow_dispatch:
    inputs:
      logLevel:
        type: choice
        description: Log level
        default: warning
        options:
          - info
          - warning
          - error
I don't think there's currently a built-in way to do this, but you can accomplish it by doing something like:
Create a validator script (e.g. .github/scripts/validateInputs.js):

// exit with an error when logLevel is not one of the allowed values
if (!['info', 'warning', 'error'].includes(process.env.logLevel)) {
  console.log("logLevel must be either 'info', 'warning' or 'error'");
  process.exit(1);
}
Add it as a new step to your job
- name: Validate logLevel
  run: logLevel=${{ github.event.inputs.logLevel }} node .github/scripts/validateInputs.js
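For reference, here is a rough sketch of how the input and the validation step could fit together in a full workflow; the workflow name, file layout, and runner are assumptions, and the checkout step is only there so the script in .github/scripts is available:

# .github/workflows/manual.yml (sketch)
name: Manual run with validated input
on:
  workflow_dispatch:
    inputs:
      logLevel:
        description: Log level
        default: warning
        required: true
jobs:
  run:
    runs-on: ubuntu-latest
    steps:
      # check out the repo so .github/scripts/validateInputs.js is present
      - uses: actions/checkout@v3
      - name: Validate logLevel
        run: logLevel=${{ github.event.inputs.logLevel }} node .github/scripts/validateInputs.js
      # ... the rest of your steps run only if validation passed ...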
For context, I have an application that depends on the mysql chart; I've set up the stable mysql chart as a dependent (sub)chart of my myapp chart.
I have a very large set of SQL files, and due to their size I need to pack them into a specialized seed container. Using the standard helm chart, I can pass in a seed container to initialize my database, as shown in the values.yaml snippet below.
Are there any strategies to get subchart values created at runtime into my values.yaml?
mysql:
  extraInitContainers: |
    - name: init-seed
      image: foobar/seed:0.1.0
      env:
        - name: MYSQL_HOSTNAME
          value: foobar-mysql
        - name: MYSQL_USER
          value: foo
        - name: MYSQL_PASS
          value: bar
I've tried the following, to no avail:
a. Templatize and pass a service name into the MYSQL_HOSTNAME env var
b. Pass the {{ include "mangos_zero.fullname" . }} helper into this value
c. Find the name of the other container within the mysql pod at runtime
How can I get the service name of the mysql chart, or its container name, passed into my init container?
Not into your values.yaml, but yes into your templates. Assuming you are using Helm v3, you can use the lookup function wherever you need the service name of your MySQL DB to create your seed data. For example:
(lookup "v1" "Service" "mynamespace" "mysql-chart").metadata.name
Not sure where to start, but here is what I have and what I'm trying to do.
What I have:
I have three minions that are part of a three-tier application named employee.
There are three servers: web01 as the web server, app01 as the app server, and db01 as the database server.
Each server has grains values set on it.
Here is each server with its grains keys and values:
web01: appname:employee and tier:web
app01: appname:employee and tier:app
db01: appname:employee and tier:db
What I'm trying to do:
I'm trying to push configuration files to web01 and app01. These config files have a variable (the hostname of another tier's minion): the config on web01 should contain the name app01, and the config on app01 should contain the name db01. The names of these servers should be grabbed based on the grains values.
For example, the hostname of the app server is the server whose grains values equal "appname:employee and tier:app".
I'm not sure how to do it; I'm too new to Salt and don't have much experience with it or with Jinja templates.
Any help will be really appreciated.
Thank you
So if I understand you right, you want the config file on web01 and app01 to contain the relevant hostnames.
If so, you can use a pillar file where you state these attributes.
/srv/pillar/employee.sls:
employee:
  hostname_of_another_tier_minion: hostname.example.com
You can then reference this in your jinja template /srv/formulas/employee/templates/config.conf.jinja:
hostname_of_another_tier_minion {{ pillar['employee']['hostname_of_another_tier_minion'] }}
Just to be complete, you reference your template in /srv/employee/web.sls and /srv/employee/app.sls:
web-config-file:
  file.managed:
    - user: root
    - group: root
    - template: jinja
    - mode: '0644'
    - names:
      - /etc/<web-conf-dir>/web.conf:
        - source: salt://employee/templates/config.conf.jinja
Let me know if you have any further questions.
UPDATE:
If the hostnames are not known in advance, as you said, you can first get them via grains and then put them into the jinja template that gets rendered into a config on every server; see the sketch below.
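As a rough sketch, one way to pull the app-tier hostname into the template is the Salt Mine; this assumes the mine is enabled and network.get_hostname is listed under mine_functions on the minions (on older Salt versions the keyword is expr_form instead of tgt_type):

{# config.conf.jinja -- sketch, assumes the Salt Mine is configured as described above #}
{%- set app_hosts = salt['mine.get']('G@appname:employee and G@tier:app', 'network.get_hostname', tgt_type='compound') %}
{%- if app_hosts %}
hostname_of_another_tier_minion {{ app_hosts.values() | list | first }}
{%- endif %}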
I launched a Windows instance. I found the config.yml file in the Oracle Cloud Agent folder under Program Files.
The config.yml file looks like below:
telemetry:
  endpoint_format: 'https://telemetry-ingestion.{}.oraclecloud.com'
  endpoint_path: /20180401/metrics
  submission_headers:
    accept: application/json
    content-type: application/json
  get_headers:
    accept: application/json
  metrics:
    - friendly_name: CPU Utilization
      name: CpuUtilization
      unit: Percent
      min_range: 0
      max_range: 100
    - friendly_name: Memory Utilization
      name: MemoryUtilization
      unit: Percent
      min_range: 0
      max_range: 100
    ...
    - friendly_name: Thread Count
      name: ThreadCount
      unit: Count
perfmon:
  metrics:
    - path: \Processor(_Total)\% Processor Time
      telemetry_metric_name: CpuUtilization
      type: double
    - path: \Memory\% Committed Bytes In Use
      telemetry_metric_name: MemoryUtilization
      type: double
    ...
    - path: \Process(_total)\Thread Count
      telemetry_metric_name: ThreadCount
      type: double
I have added the Thread Count metric to the file.
I queried the List Metrics API but did not find the added metric (Thread Count).
Is this the correct way of adding more metrics? If yes, does any other flow have to be done in order to fetch the metric through the REST API?
Developer of the Oracle Cloud Agent here. The configuration is structured this way for future extensibility. Currently, if you add a metric to the config, the agent will try to read it from the OS and attempt to submit it to the telemetry service backend. The telemetry service will reject the agent's attempt, as it only supports a fixed set of metrics, and yours is outside that set.