How to use a cron job in my Laravel application on Elastic Beanstalk - amazon-elastic-beanstalk

I deployed a Laravel application on Elastic Beanstalk and I am applying a cron job to this URL:
http://prodweb-env.eba-4rtbkvxj.us-east-1.elasticbeanstalk.com/index.php/api/auto_del
This URL deletes the API data after an upload and download. The cron config file below deploys successfully, but the job is not running properly, so please give me a solution for how to resolve it.
files:
  "/etc/cron.d/mycron":
    mode: "000644"
    owner: root
    group: root
    content: |
      0 * * * * root curl http://prodweb-env.eba-4rtbkvxj.us-east-1.elasticbeanstalk.com/index.php/api/auto_del
commands:
  remove_old_cron:
    command: "rm -f /etc/cron.d/mycron.bak"
When I run crontab -l, it shows no cron job for ec2-user or root, so please help me solve this issue.
I expect a solution that helps me resolve this problem.
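Note that crontab -l only lists per-user crontabs, so a job installed under /etc/cron.d will never appear there even when it is working. To verify the file was deployed and whether cron is firing, something like the following should help (the log path assumes Amazon Linux):
# Jobs in /etc/cron.d are system crontabs; crontab -l does not list them
cat /etc/cron.d/mycron
# On Amazon Linux, cron executions are logged to /var/log/cron
sudo grep CRON /var/log/cron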

Related

Run bash script after MySQL Docker container starts (every time, not just the initial time)

I am trying to get a bash script to run when my MySQL container starts. Not the initial time when there are no databases to create, but subsequent times (so placing the files in docker-entrypoint-initdb.d will not work).
My objective is to re-build my container with some database upgrade scripts (schema changes, etc). The thought being I deploy the container with the initial scripts and deploy subsequent updates with my database upgrades as the application ages. It seems like this would be an easy task, but I am having trouble.
Most of the things I have tried came from suggestions I found googling. Here are things I have tried with no success:
Modify the entrypoint.sh (and /usr/local/bin/docker-entrypoint.sh) in the Dockerfile build to add in a call to my script.
This does not even seem to be called, which I suspect is a sign, but my database starts (also note it creates my schema fine the first time)
I do this with a RUN sed in my Dockerfile and have confirmed my changes exist after the container starts
Tried running my script on startup by:
- adding a script to /etc/rc.d/rc.local
- adding a restart cron job (well, I tried, but the Oracle Linux distro doesn't have it)
- modifying the /etc/bashrc
- adding a script to /etc/profile.d/
- appending to /etc/profile.d/sh.local
Tried adding a command to my docker-compose.yml, but it said that wasn’t found.
My actual database upgrade script works great when I log in to the container manually and execute it. All of my experiments above have been just touching a file or echoing to a file as a proof of concept. Once I get that working, I'll add in the logic to wait for MySQL to start and then run my actual script.
Dockerfile:
FROM mysql:8.0.32
VOLUME /var/lib/mysql
## these are my experiments
RUN sed -i '/main "$@"/a echo "run the script here" > /usr/tmp/XXX' /entrypoint.sh
RUN sed -i '/main "$@"/a echo "run the script here" > /usr/tmp/XXX' /usr/local/bin/docker-entrypoint.sh
RUN echo "touch /usr/tmp/XXX" >> /etc/profile.d/sh.local
RUN sed -i '/doublesourcing/a echo "run the script here" > /usr/tmp/XXX' /etc/bashrc
I build and run it using:
docker build -t mysql-database -f Dockerfile .
docker run -it --rm -d -p 3306:3306 --name database -v ~/Docker-Volume-Share/database:/var/lib/mysql mysql-database
Some other information that may be useful
I am using a volume on the host. I’ve run my experiments with an existing schema as well as by deleting this directory so it starts fresh
I am using mysql:8.0.32 as the image (Oracle Linux Server release 8.7)
Docker version 20.10.22, build 3a2c30b
Host OS is macOS 13.2.1
Thanks in advance for any tips and guidance!
It sounds like you are trying to run a script after the MySQL container has started and the initial setup has been completed. Here are a few suggestions:
1. Use a custom entrypoint script
You can create a custom entrypoint script that wraps the default entrypoint script included in the MySQL container image. In your Dockerfile, copy your custom entrypoint script into the container and set it as the entrypoint. Here's an example:
FROM mysql:8.0.32
COPY custom-entrypoint.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/custom-entrypoint.sh
ENTRYPOINT ["/usr/local/bin/custom-entrypoint.sh"]
In your custom entrypoint script, you can start the default entrypoint in the background, wait for the server to come up, and then check whether the database already exists and run your upgrade script if it does. Here's an example:
#!/bin/bash
set -e
# Start the default entrypoint (it receives the image's CMD, i.e. mysqld) in the background
/usr/local/bin/docker-entrypoint.sh "$@" &
# Wait until the server accepts connections
until mysqladmin ping -uroot -p"$MYSQL_ROOT_PASSWORD" --silent; do sleep 1; done
# Check if the database already exists
if mysql -uroot -p"$MYSQL_ROOT_PASSWORD" -e "use my_database"; then
  # Run your upgrade script
  /path/to/upgrade-script.sh
fi
# Keep mysqld in the foreground so the container stays up
wait
2. Use a Docker Compose file
If you're using Docker Compose, you can specify a command to run after the container has started. Here's an example:
version: '3'
services:
  database:
    image: mysql:8.0.32
    volumes:
      - ~/Docker-Volume-Share/database:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: mypassword
    command: >
      bash -c "
      /usr/local/bin/docker-entrypoint.sh mysqld &
      while ! mysqladmin ping -h localhost --silent; do sleep 1; done;
      /path/to/upgrade-script.sh;
      wait
      "
This command runs the default entrypoint script in the background, waits for MySQL to start, runs your upgrade script, and then waits on the background mysqld process so the container keeps running afterwards.
I hope these suggestions help you achieve your goal!

Docker container: /bin/sh: cat: No such file or directory

I'm using the mysql/mysql-server image to create a MySQL server in Docker. Since I want to set up my database (add users, create tables) automatically, I've created an SQL file that does that for me. In order to run that script automatically, I extended the image with this Dockerfile:
FROM mysql/mysql-server:latest
RUN mkdir /scripts
WORKDIR /scripts
COPY ./db_setup.sql .
RUN mysql -u root -p password < cat db_setup.sql
but for some reason, this happens:
/bin/sh: cat: No such file or directory
ERROR: Service 'db' failed to build : The command '/bin/sh -c mysql -u root -p password < cat db_setup.sql' returned a non-zero code: 1
How do I fix this?
You can just remove the cat command from your RUN command:
RUN mysql -u root -p password < db_setup.sql
No such file or directory is returned because the shell's < redirection expects a file name, so it tries to open a file literally named cat in the directory set by WORKDIR. You can just redirect the stdin of mysql to come from the db_setup.sql file directly.
EDIT 2: Keep in mind your example is a RUN command that attempts to run mysql and create a layer at docker image build time. You may want to have this run during the mysql entrypoint script at runtime instead (e.g. scripts are run from the docker-entrypoint-initdb.d/ directory by the docker-entrypoint.sh script of the official mysql image) or use other features that are documented for the official image.
RUN is a build-time command. MySQL isn't running at this point.
If you were/are using a standard image, there is a location for database initialization:
FROM mysql:8.0
COPY db_setup.sql /docker-entrypoint-initdb.d
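For completeness, a minimal sketch of building and running that image (the image name, password, and port mapping here are illustrative, not from the question):
# Build the image with the init script baked in
docker build -t mysql-initdb .
# Scripts in /docker-entrypoint-initdb.d run on first start, when the data directory is empty
docker run -d --name db -e MYSQL_ROOT_PASSWORD=password -p 3306:3306 mysql-initdb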
The cat command is not present in the mysql/mysql-server:latest image.
Moreover, you only need to provide the filename after the redirection:
RUN mysql -u root -p password < db_setup.sql

registry URL and process for installing an external docker image on openshift online (v3)

I am using the OpenShift Online platform. I am trying to build a custom Docker image locally (on my Mac) and push it to the registry of my project on OpenShift Online.
I am unable to do that. Can someone please advise what the registry URL should be?
I have tried using the following:
registry.starter-us-east-1.openshift.com
registry.access.redhat.com
The full command I am trying to use to log in is below; however, I am not getting a response. The screen just waits.
docker login -u username -e any_email_address -p token_value registry_service_host:port
My intent, after completing above, is to then try and push the image that I have built locally.
Any advice on the above or else alternate approaches would be appreciated. Thank you.
To discover the OpenShift Online registry URL, use the following steps:
Click the "Copy Login Command" button and copy the oc login command;
Run the oc login command in the terminal;
After logging in, run oc registry info in the terminal.
The registry is at --> registry.<cluster-id>.openshift.com.
For the starter tier US East region, the cluster id is --> starter-us-east-1.
So the registry can be found at --> registry.starter-us-east-1.openshift.com.
Once you know the docker registry endpoint, you can follow the instructions at:
https://docs.openshift.com/online/dev_guide/managing_images.html#accessing-the-internal-registry
to login and pull/push images from/to the registry.
In short, use:
docker login -u `oc whoami` -e `oc whoami` -p `oc whoami -t` \
https://registry.starter-us-east-1.openshift.com
For future reference, the details for accessing the registry will appear on the About page in the help drop-down menu, albeit right now for Online that change hasn't managed to propagate into production, although it is already visible in newer versions of OpenShift.
By default, the OpenShift internal registry is only used internally to import images from external repositories. If you need to use it as a registry to pull and push images from your machine, you have to run the following command to enable the default route.
oc patch config.imageregistry cluster -n openshift-image-registry --type merge -p '{"spec": {"defaultRoute": true}}'
Then run
oc get route -n openshift-image-registry
to find the registry URL.
When pushing an image, use the following format to push it to the required project:
[URL]/[project]/[image]:[tag]
To log in using docker or podman:
TOKEN=$(oc whoami -t)
podman login -u anything -p ${TOKEN} [URL]
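Building on the answer above, a sketch of tagging and pushing an image once the default route is exposed (the project and image names here are illustrative assumptions):
# Resolve the exposed registry route (named default-route when spec.defaultRoute is true)
REGISTRY=$(oc get route default-route -n openshift-image-registry -o jsonpath='{.spec.host}')
# Tag the local image as [URL]/[project]/[image]:[tag], then push it
podman tag myimage:latest ${REGISTRY}/myproject/myimage:latest
podman push ${REGISTRY}/myproject/myimage:latest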

Permission denied errors when creating app with custom OpenShift cartridge

I'm using OpenShift Origin and developing a cartridge for the first time. When my bin/install and bin/control scripts are running I've noticed "Permission denied" errors when they try to access anything in the cartridge usr dir. In the node platform.log I see the offending command that OpenShift runs looks like this (where my bin/control start tries to run a script in usr):
/sbin/runuser -s /bin/sh 5351e627ee5a934f290001d2 -c "exec /usr/bin/runcon 'unconfined_u:system_r:openshift_t:s0:c0,c1004' /bin/sh -c \"set -e; /var/lib/openshift/5351e627ee5a934f290001d2/mycart/bin/control start \""
Since the usr dir is a symlink I originally thought it was related to that, but now I think it's related to selinux (which I don't know much about). If I do a "ls -Z" on my app's cartridge dir the files are "system_u:object_r:openshift_var_lib_t:s0:c0,c1004" but the contents of the usr dir are "unconfined_u:object_r:default_t:s0", so it doesn't match what's in the above command.
I used the oo-admin-cartridge command to install the cartridge to my Origin VM.
Any ideas on how to fix this?
What I ended up doing was running "chcon -R -u system_u -t bin_t usr/" before installing the cartridge with oo-admin-cartridge. Built-in cartridges are not affected by this problem (I checked nodejs), so I feel like it might be an oo-admin-cartridge bug; I would expect it to fix up the SELinux contexts instead of using whatever I provide.
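For reference, a sketch of that workaround, run from the cartridge source directory before installing (the verification step is my addition):
# Relabel usr/ so its SELinux context allows OpenShift to execute the contents
chcon -R -u system_u -t bin_t usr/
# Verify the new context
ls -Z usr/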

Something goes wrong with the SSH while setting up hadoop

I'm new to Hadoop. I installed Ubuntu 12.10 on my computer and I want to install Hadoop in pseudo-distributed mode on a single node. I searched and found lots of tutorials, but I have a problem with the SSH setup. I did what the tutorials said.
I am sure the problem is with SSH. I installed openssh-server and did this:
hadoop00@WebsoftStation:~$ ssh-keygen -t dsa -P "" -f ~/.ssh/id_dsa
hadoop00@WebsoftStation:~/.ssh$ cat ~/.ssh/id_dsa.pub >> authorized_keys
Then I can successfully ssh my localhost like this:
hadoop00@WebsoftStation:~$ ssh localhost
It worked.
So I changed directory to the Hadoop folder and then ran:
hadoop00@WebsoftStation:/usr/local/hadoop$ sudo bin/start-all.sh
[sudo] password for hadoop00:
starting namenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-root-namenode-WebsoftStation.out
root@localhost's password:
root@localhost's password: localhost: Permission denied, please try again.
So,what's the problem?
You have set up password-less SSH only for your current account. Since ssh localhost already works without any problem, the next thing you need to do is give execution permission to your scripts.
Execute the following commands:
chmod +x bin/*.sh ---> assigns execution permission to all the scripts
bin/start-all.sh ---> executes the script
Note: Hadoop can also be run without a password-less SSH setup by using the hadoop-daemon.sh script, as sketched below. The only advantage of password-less SSH is that the start-all.sh script will take the trouble of starting the daemons on each of the nodes for you.
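For illustration, a sketch of starting the daemons individually with hadoop-daemon.sh (the daemon names assume a classic Hadoop 1.x layout, matching the start-all.sh era):
# Each daemon is started locally, so no SSH is involved
bin/hadoop-daemon.sh start namenode
bin/hadoop-daemon.sh start datanode
bin/hadoop-daemon.sh start secondarynamenode
bin/hadoop-daemon.sh start jobtracker
bin/hadoop-daemon.sh start tasktracker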
You need to change permissions for your Hadoop folder to be owned by the hadoop00 user:
cd /usr/local/
sudo chown -R hadoop00:hadoop00 /usr/local/hadoop
Then you can cd into the sbin folder and run things without sudo. If you use sudo, you're running the scripts as root, which has different environment variables etc., which is why you see different behavior.
Why are you using sudo? This is clearly a permission problem.
Try running this without sudo
bin/start-all.sh