Can I configure scaling for an Elastic Beanstalk instance at deploy time?
There seems to be no option to do this, so I have to go in after it is deployed and change it in the configuration settings.
If you use ebextensions, then yes. You can put autoscaling-related option settings in your .ebextensions/01-scaling.config file.
You may want to use the "MinSize" and "MaxSize" option settings in the "aws:autoscaling:asg" namespace.
Example config file:
option_settings:
  - namespace: aws:autoscaling:asg
    option_name: MinSize
    value: 3
  - namespace: aws:autoscaling:asg
    option_name: MaxSize
    value: 6
Your options are then bundled with your application, so they are applied as part of environment startup and you do not have to do a second deployment.
Documentation on option settings here: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options.html
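As a side note, the same config file can carry other scaling-related settings if you need them; the snippet below is only a sketch and the instance type value is just an example:
option_settings:
  - namespace: aws:autoscaling:launchconfiguration
    option_name: InstanceType
    value: t2.micro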
I am able to install OKD on one node and scale up to multiple nodes accordingly.
But now I want to install OKD with GlusterFS on one node and then extend this to multiple nodes.
Currently I am getting an error that at least three nodes are required. How can I bypass this check in Ansible?
As per the GitHub documentation, I have three options:
Configuring a new, natively-hosted GlusterFS cluster. In this scenario, GlusterFS pods are deployed on nodes in the OpenShift cluster which are configured to provide storage.
Configuring a new, external GlusterFS cluster. In this scenario, the cluster nodes have the GlusterFS software pre-installed but have not been configured yet. The installer will take care of configuring the cluster(s) for use by OpenShift applications.
Using existing GlusterFS clusters. In this scenario, one or more GlusterFS clusters are assumed to be already set up. These clusters can be either natively-hosted or external, but must be managed by a heketi service.
Can option 2 or 3 be used to start with one node and extend accordingly? I have installed a GlusterFS cluster on one node and extended it to a second node, but how do I introduce it to OpenShift?
https://imranrazakh.blogspot.com/2018/08/
I found one way to install GlusterFS on one node. Below is an all-in-one installation with GlusterFS.
I changed the inventory file as below:
[OSEv3:children]
masters
nodes
etcd
glusterfs
[OSEv3:vars]
ansible_ssh_common_args='-o StrictHostKeyChecking=no'
ansible_ssh_user=root
openshift_deployment_type=origin
openshift_enable_origin_repo=false
openshift_disable_check=disk_availability,memory_availability
os_firewall_use_firewalld=true
openshift_public_hostname=console.1.1.0.1.nip.io
openshift_master_default_subdomain=apps.1.1.0.1.nip.io
openshift_storage_glusterfs_is_native=false
openshift_storage_glusterfs_storageclass=true
openshift_storage_glusterfs_heketi_is_native=true
openshift_storage_glusterfs_heketi_executor=ssh
openshift_storage_glusterfs_heketi_ssh_port=22
openshift_storage_glusterfs_heketi_ssh_user=root
openshift_storage_glusterfs_heketi_ssh_sudo=false
openshift_storage_glusterfs_heketi_ssh_keyfile="/root/.ssh/id_rsa"
[masters]
1.1.0.1 openshift_ip=1.1.0.1 openshift_schedulable=true
[etcd]
1.1.0.1 openshift_ip=1.1.0.1
[nodes]
1.1.0.1 openshift_ip=1.1.0.1 openshift_node_group_name="node-config-all-in-one" openshift_schedulable=true
[glusterfs]
1.1.0.1 glusterfs_devices='[ "/dev/vdb" ]'
Now we have to hack the Ansible script, as it expects three nodes, by adding --durability none in the following task file:
openshift-ansible/roles/openshift_storage_glusterfs/tasks/heketi_init_db.yml
The following is the updated snippet:
- name: Create heketi DB volume
  command: "{{ glusterfs_heketi_client }} setup-openshift-heketi-storage --image {{ glusterfs_heketi_image }} --listfile /tmp/heketi-storage.json --durability none"
  register: setup_storage
By default it creates a StorageClass that expects a replicated environment, so we have to create a custom StorageClass like the one below with "volumetype: none":
oc create -f - <<EOT
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-nr-storage
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
parameters:
  resturl: http://heketi-storage.glusterfs.svc:8080
  restuser: admin
  secretName: heketi-storage-admin-secret
  secretNamespace: glusterfs
  volumetype: none
provisioner: kubernetes.io/glusterfs
volumeBindingMode: Immediate
EOT
Now you can create storage dynamically from the web console :) Any suggestions for improvement are welcome.
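As a quick check from the command line, a claim against the new StorageClass should bind and provision a volume. This is only a sketch; the claim name and size are examples:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: glusterfs-nr-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi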
Next I will check how I can extend it.
I am deploying an app to Amazon Elastic Beanstalk, and their docs list a namespace in .ebextensions that can be used for setting the PHP.ini configuration:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_PHP.container.html
The namespace is aws:elasticbeanstalk:container:php:phpini.
I assumed that it was possible to add any PHP configuration, for instance disable_functions (a core PHP.ini directive), but it looks like this is impossible and only some pre-determined configuration options are supported. Elastic Beanstalk throws an error that the option is not supported.
How can I set additional PHP configuration in Amazon Elastic Beanstalk?
Looks like the best way to do this is to use .ebextensions to add a new ini file that will be read by the PHP that EB installs. For example:
files:
  "/etc/php.d/99-disable-functions.ini":
    mode: "000644"
    owner: root
    group: root
    content: |
      disable_functions = exec,shell_exec,passthru,proc_open,system,parse_ini_file,show_source
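If you want to confirm the directive is picked up, one optional sketch is to print the effective value during deployment so it lands in the EB activity log (the command key name is just an example):
container_commands:
  01_show_disable_functions:
    # prints the effective disable_functions value during deployment
    command: php -r 'echo ini_get("disable_functions"), PHP_EOL;'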
I have a Django app running on OpenShift 3. I need to run certain manage.py commands on a regular basis. In OpenShift 2 I used the cron gear, and now in OpenShift 3 I want to use the CronJob pod type.
I want to create a pod for the cron job that uses the same source as the Django app, but is not exposed.
For example:
W1 - Django app
D1 - Postgres DB
M1 - django app for manage.py jobs, run as a cronjob pod.
Any help is appreciated.
You want to use a scheduled job.
https://docs.openshift.com/container-platform/3.5/dev_guide/cron_jobs.html
https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/
https://blog.openshift.com/openshift-jobs/
Note that at this time (OpenShift 3.5), you have to use batch/v2alpha1 as the API version. Be careful of out of date documentation showing older version labels.
What I am not sure of is how you can easily reference the image associated with an existing imagestream produced when you used the S2I builder to build your application, if you want to use the same image. The base Kubernetes object for this expects you to refer to the image in the image registry. You would thus need to work that out by looking at the imagestream and copying the image registry IP and image details over by hand.
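For illustration only, the image reference copied out of the imagestream typically looks like the snippet below; the project and image names are placeholders and the internal registry address may differ in your cluster:
containers:
- name: djangomanage
  # value copied from the imagestream's status.dockerImageRepository field
  image: docker-registry.default.svc:5000/myproject/myapp:latest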
UPDATE 1
See:
https://stackoverflow.com/a/45227960/128141
for details of how from OpenShift 3.6 you can have it resolve the imagestream name automatically. That mechanism is still alpha status in 3.6, but does work.
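If I understand the linked answer correctly, the 3.6 mechanism boils down to an annotation on the pod template so that a bare imagestream name gets resolved; treat this as a sketch rather than a verified recipe:
jobTemplate:
  spec:
    template:
      metadata:
        annotations:
          # alpha in 3.6: lets bare imagestream names resolve to the registry image
          alpha.image.policy.openshift.io/resolve-names: "*"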
I've gotten it to work by specifying the image name in the YAML, but when I then tried to get it to work as part of the template, I ran into an error when trying to use the batch/v1 version on this server:
Cannot create cron job "djangomanage". The API version batch/v1 for kind CronJob is not supported by this server.
My template code is:
- apiVersion: batch/v1
  kind: CronJob
  metadata:
    name: djangomanage
  spec:
    schedule: "*/5 * * * *"
    jobTemplate:
      spec:
        template:
          spec:
            containers:
            - name: djangomanage
              image: '${NAME}:latest'
              env:
              - name: APP_SCRIPT
                value: "/opt/app-root/src/cron.sh"
            restartPolicy: Never
CRON.SH
python /opt/app-root/src/manage.py
You need to update line 1 with this:
- apiVersion: batch/v1beta1
See the link below:
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#cronjob-v1beta1-batch
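For context, with that change applied the top of the template object would read like this (everything else stays as in the template above):
- apiVersion: batch/v1beta1
  kind: CronJob
  metadata:
    name: djangomanage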
It fails to pull the image with the SHA256 digest identifier.
Unfortunately this is a side-effect of DockerHub removing backwards compatibility for Docker 1.9 daemons. When images are pushed using Docker 1.10, pull-by-id will fail for older daemons (which includes OpenShift masters importing metadata from the Hub). You can work around this by pulling the centos image and pushing it to the internal registry.
At the current time, using Docker 1.9 on your hosts will avoid this issue.
You can apply a workaround for this issue by removing the Image Change Trigger and removing the hash from the image attribute in the container spec.
Modify the build config:
strategy:
  dockerStrategy:
    from:
      kind: ImageStreamTag
      name: mysql-56-centos7
Replace it with:
strategy:
  dockerStrategy:
    from:
      kind: DockerImage
      name: docker.io/centos/mysql-56-centos7:latest
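The same idea applies to a deployment's container spec if the image there is pinned by digest; a rough sketch (the container name is an example):
spec:
  containers:
  - name: mysql
    # instead of docker.io/centos/mysql-56-centos7@sha256:<digest>
    image: docker.io/centos/mysql-56-centos7:latest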
We use Beanstalk to deploy Node applications, and it works very well. I've created a couple of config files in an .ebextensions directory to apply configuration to our apps when we load them up. Again, it mostly works well. The one thing that does not work is defining the application health check URL; I can't get it to take effect. One odd thing about it: it seems to be the only parameter I have come across so far that has spaces in it, and I'm wondering about that. I have tried enclosing the value in quotes, just to see if that is the problem, but it still doesn't work. Has anyone done this before, and can you tell me whether it works and whether there is something syntactically incorrect here? As I said, the rest of the params get set correctly in Beanstalk; just the last one doesn't. Note that #environment# gets replaced by a grunt script before this gets deployed.
Here's the config file:
option_settings:
  - namespace: aws:elasticbeanstalk:application:environment
    option_name: NODE_ENV
    value: #environment#
  - namespace: aws:elasticbeanstalk:container:nodejs
    option_name: NodeVersion
    value: 0.10.10
  - namespace: aws:autoscaling:trigger
    option_name: LowerThreshold
    value: 40
  - namespace: aws:autoscaling:trigger
    option_name: MeasureName
    value: CPUUtilization
  - namespace: aws:autoscaling:trigger
    option_name: UpperThreshold
    value: 60
  - namespace: aws:autoscaling:trigger
    option_name: Unit
    value: Percent
  - namespace: aws:elasticbeanstalk:application
    option_name: Application Healthcheck URL
    value: /load_balance_test
Adding this worked for me:
# .ebextensions/healthcheckurl.config
option_settings:
  - namespace: aws:elasticbeanstalk:application
    option_name: Application Healthcheck URL
    value: /health
  - namespace: aws:elasticbeanstalk:environment:process:default
    option_name: HealthCheckPath
    value: /health
I discovered the second setting by doing eb config, which gives a nice overview of environment settings that can be overridden with option_settings in .ebextensions/yet-another.config files.
The spaces in this property name are weird, but it works when used with the alternative shorthand syntax for options:
option_settings:
  aws:elasticbeanstalk:application:
    Application Healthcheck URL: /
I use CloudFormation for EB, and in CF the syntax for that parameter is very strange. If that config file works the same as CF, the following string should work for you:
HTTP:80/load_balance_test
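For reference, a rough sketch of how that value sits inside a CloudFormation Elastic Beanstalk environment's option settings (the surrounding environment resource is omitted):
OptionSettings:
  - Namespace: aws:elasticbeanstalk:application
    OptionName: Application Healthcheck URL
    Value: HTTP:80/load_balance_test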
If you're using Terraform, then just make sure you have spaces in the name, and it will work fine:
setting {
  namespace = "aws:elasticbeanstalk:application"
  name      = "Application Healthcheck URL"
  value     = "/api/health"
}
I just tried it, and it worked for me. Only the format specified in the original question worked for me, i.e.:
option_settings:
  - namespace: aws:elasticbeanstalk:application
    option_name: Application Healthcheck URL
    value: /api/v1/health/
You also might want to set the health check type to ELB instead of the default EC2.
This is how I configured mine:
$ cat .ebextensions/0090_healthcheckurl.config
Resources:
  AWSEBAutoScalingGroup:
    Type: "AWS::AutoScaling::AutoScalingGroup"
    Properties:
      HealthCheckType: "ELB"
      HealthCheckGracePeriod: "600"
option_settings:
  - namespace: aws:elasticbeanstalk:application
    option_name: Application Healthcheck URL
    value: /_status