Elasticsearch 7.17.7 uses up to 8.1 GB of RAM on Ubuntu 22.04.1 LTS, causing slowness on the device

I installed Elasticsearch 7.17.7 on my local machine running Ubuntu 22.04.1 LTS (not a virtual machine).
To start Elasticsearch I run this command in the terminal:
sudo systemctl start elasticsearch.service
After it successfully starts and runs, I notice it uses up to 8.1 GB of RAM, which seems far too much: my machine only has 16 GB of usable RAM.
Attached are the Elasticsearch info and the Stacer output.
From time to time I check whether Elasticsearch is still running, because when I am using my laptop for work, with tools such as Google Chrome, VS Code and SmartGit open, at some point the device becomes really slow. Then, when I run certain commands needed for work, they return this error:
Elasticsearch\Common\Exceptions\NoNodesAvailableException: No alive nodes found in your cluster
I checked and found that the error happens because Elasticsearch suddenly stops working.
I am able to restart Elasticsearch using
sudo systemctl restart elasticsearch.service or sudo service elasticsearch restart
but sometimes an error appears and Elasticsearch is unable to restart. When this happens I just reboot my laptop, which I want to avoid as it disrupts my work.
I have not been able to reproduce the error log yet, but I will update this question once I find it.
Please share any tips or experience if you have encountered this issue.
I have tried installing a different version of Elasticsearch (7.16), but the same issue happens.
I have also tried reinstalling Elasticsearch, but the result is the same.
Update: Elasticsearch finally crashed and I am unable to restart the service; refer to the images below for details and the log.
Elasticsearch service failed
Error during restart
Update: below is the log I was able to capture when starting Elasticsearch.
==> /var/log/syslog <==
Dec 8 14:26:37 farhan-Yoga-6-13ALC6 systemd[1]: Starting Elasticsearch...
==> /var/log/kern.log <==
Dec 8 14:26:47 farhan-Yoga-6-13ALC6 kernel: [16726.683629] [UFW BLOCK] IN=wlp2s0 OUT= MAC=01:00:5e:00:00:01:64:3a:ea:e9:0a:4e:08:00 SRC=0.0.0.0 DST=224.0.0.1 LEN=36 TOS=0x00 PREC=0xC0 TTL=1 ID=14037 PROTO=2
==> /var/log/syslog <==
Dec 8 14:26:47 farhan-Yoga-6-13ALC6 kernel: [16726.683629] [UFW BLOCK] IN=wlp2s0 OUT= MAC=01:00:5e:00:00:01:64:3a:ea:e9:0a:4e:08:00 SRC=0.0.0.0 DST=224.0.0.1 LEN=36 TOS=0x00 PREC=0xC0 TTL=1 ID=14037 PROTO=2
==> /var/log/ufw.log <==
Dec 8 14:26:47 farhan-Yoga-6-13ALC6 kernel: [16726.683629] [UFW BLOCK] IN=wlp2s0 OUT= MAC=01:00:5e:00:00:01:64:3a:ea:e9:0a:4e:08:00 SRC=0.0.0.0 DST=224.0.0.1 LEN=36 TOS=0x00 PREC=0xC0 TTL=1 ID=14037 PROTO=2
==> /var/log/auth.log <==
Dec 8 14:26:49 farhan-Yoga-6-13ALC6 sudo: pam_unix(sudo:session): session closed for user root
Dec 8 14:26:49 farhan-Yoga-6-13ALC6 sudo: farhan : TTY=pts/3 ; PWD=/var/log ; USER=root ; COMMAND=/usr/sbin/service elasticsearch restart
Dec 8 14:26:49 farhan-Yoga-6-13ALC6 sudo: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
==> /var/log/syslog <==
Dec 8 14:26:49 farhan-Yoga-6-13ALC6 systemd[1]: Started Elasticsearch.
Dec 8 14:26:49 farhan-Yoga-6-13ALC6 systemd[1]: Stopping Elasticsearch...
Dec 8 14:26:49 farhan-Yoga-6-13ALC6 systemd-entrypoint[46384]: uncaught exception in thread [process reaper (pid 46606)]
Dec 8 14:26:49 farhan-Yoga-6-13ALC6 systemd-entrypoint[46384]: java.security.AccessControlException: access denied ("java.lang.RuntimePermission" "modifyThread")
Dec 8 14:26:49 farhan-Yoga-6-13ALC6 systemd-entrypoint[46384]: #011at java.base/java.security.AccessControlContext.checkPermission(AccessControlContext.java:485)
Dec 8 14:26:49 farhan-Yoga-6-13ALC6 systemd-entrypoint[46384]: #011at java.base/java.security.AccessController.checkPermission(AccessController.java:1068)
Dec 8 14:26:49 farhan-Yoga-6-13ALC6 systemd-entrypoint[46384]: #011at java.base/java.lang.SecurityManager.checkPermission(SecurityManager.java:411)
Dec 8 14:26:49 farhan-Yoga-6-13ALC6 systemd-entrypoint[46384]: #011at org.elasticsearch.secure_sm.SecureSM.checkThreadAccess(SecureSM.java:160)
Dec 8 14:26:49 farhan-Yoga-6-13ALC6 systemd-entrypoint[46384]: #011at org.elasticsearch.secure_sm.SecureSM.checkAccess(SecureSM.java:120)
Dec 8 14:26:49 farhan-Yoga-6-13ALC6 systemd-entrypoint[46384]: #011at java.base/java.lang.Thread.checkAccess(Thread.java:2360)
Dec 8 14:26:49 farhan-Yoga-6-13ALC6 systemd-entrypoint[46384]: #011at java.base/java.lang.Thread.setDaemon(Thread.java:2308)
Dec 8 14:26:49 farhan-Yoga-6-13ALC6 systemd-entrypoint[46384]: #011at java.base/java.lang.ProcessHandleImpl.lambda$static$0(ProcessHandleImpl.java:103)
Dec 8 14:26:49 farhan-Yoga-6-13ALC6 systemd-entrypoint[46384]: #011at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.<init>(ThreadPoolExecutor.java:637)
Dec 8 14:26:49 farhan-Yoga-6-13ALC6 systemd-entrypoint[46384]: #011at java.base/java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:928)
Dec 8 14:26:49 farhan-Yoga-6-13ALC6 systemd-entrypoint[46384]: #011at java.base/java.util.concurrent.ThreadPoolExecutor.processWorkerExit(ThreadPoolExecutor.java:1021)
Dec 8 14:26:49 farhan-Yoga-6-13ALC6 systemd-entrypoint[46384]: #011at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1158)
Dec 8 14:26:49 farhan-Yoga-6-13ALC6 systemd-entrypoint[46384]: #011at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
Dec 8 14:26:49 farhan-Yoga-6-13ALC6 systemd-entrypoint[46384]: #011at java.base/java.lang.Thread.run(Thread.java:1589)
Dec 8 14:26:49 farhan-Yoga-6-13ALC6 systemd-entrypoint[46384]: #011at java.base/jdk.internal.misc.InnocuousThread.run(InnocuousThread.java:186)
Dec 8 14:26:49 farhan-Yoga-6-13ALC6 systemd[1]: elasticsearch.service: Deactivated successfully.
Dec 8 14:26:49 farhan-Yoga-6-13ALC6 systemd[1]: Stopped Elasticsearch.
Dec 8 14:26:49 farhan-Yoga-6-13ALC6 systemd[1]: elasticsearch.service: Consumed 42.311s CPU time.
Dec 8 14:26:49 farhan-Yoga-6-13ALC6 systemd[1]: Starting Elasticsearch...
==> /var/log/auth.log <==
Dec 8 14:27:00 farhan-Yoga-6-13ALC6 sudo: pam_unix(sudo:session): session closed for user root
==> /var/log/syslog <==
Dec 8 14:27:00 farhan-Yoga-6-13ALC6 systemd[1]: Started Elasticsearch.
Update: I was able to find the log showing why Elasticsearch stops working; apparently the RAM usage gets too high, so the kernel's OOM killer stops the process.
Dec 8 14:30:43 farhan-Yoga-6-13ALC6 kernel: [16962.195555] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/system.slice/elasticsearch.service,task=java,pid=46712,uid=128
Dec 8 14:30:43 farhan-Yoga-6-13ALC6 kernel: [16962.195728] Out of memory: Killed process 46712 (java) total-vm:16570600kB, anon-rss:8393480kB, file-rss:0kB, shmem-rss:0kB, UID:128 pgtables:17344kB oom_score_adj:0
==> /var/log/syslog <==
Dec 8 14:30:43 farhan-Yoga-6-13ALC6 kernel: [16962.194794] Monitor Deflati invoked oom-killer: gfp_mask=0x1100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0
I am still unable to figure out how to make Elasticsearch use less RAM.

You can also run Elasticsearch via Docker with the -e ES_JAVA_OPTS="-Xmx512m" option, which caps the JVM heap at 512 MB.
Example:
docker network create elastic
docker pull docker.elastic.co/elasticsearch/elasticsearch:8.6.1
docker run --name elasticsearch --net elastic -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" -e ES_JAVA_OPTS="-Xmx512m" -t docker.elastic.co/elasticsearch/elasticsearch:8.6.1
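If you would rather keep the systemd-managed package install, the heap can likely be capped the same way by dropping an options file under /etc/elasticsearch/jvm.options.d/ (this path assumes the standard deb/rpm layout; the file name heap.options below is arbitrary). By default, Elasticsearch 7.x sizes the heap automatically to roughly half of the machine's RAM, which on a 16 GB laptop is about the 8 GB observed above.
# /etc/elasticsearch/jvm.options.d/heap.options
-Xms1g
-Xmx1g
Then restart the service and, assuming the default port 9200 and no authentication, confirm the heap size the node actually picked up:
sudo systemctl restart elasticsearch.service
curl -s 'localhost:9200/_cat/nodes?v&h=name,heap.max,ram.max'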

Related

/usr/sbin/mysqld code=exited status=203/EXEC ERROR

I don't know exactly what I did wrong, but it's likely some 'chown' operation that I did. I was trying to give the user and group mysql:mysql access to a /media/usb drive, but may have inadvertently changed something else.
When I do sudo systemctl start mysql.service I get an error. Upon examining with sudo systemctl status mysqld, I get the following:
mysql.service - MySQL Community Server
Loaded: loaded (/lib/systemd/system/mysql.service; enabled; vendor preset: enabled)
Active: activating (start-post) (Result: exit-code) since Fri 2020-06-19 08:11:01 EDT; 19s ago
Process: 15459 ExecStart=/usr/sbin/mysqld (code=exited, status=203/EXEC)
Process: 15444 ExecStartPre=/usr/share/mysql/mysql-systemd-start pre (code=exited, status=0/SUCCESS)
Main PID: 15459 (code=exited, status=203/EXEC); : 15460 (mysql-systemd-s)
Tasks: 2
Memory: 2.4M
CPU: 175ms
CGroup: /system.slice/mysql.service
└─control
├─15460 /bin/bash /usr/share/mysql/mysql-systemd-start post
└─15687 sleep 1
Jun 19 08:11:01 apil-dlrig systemd[1]: Starting MySQL Community Server...
Jun 19 08:11:01 apil-dlrig systemd[1]: mysql.service: Main process exited, code=exited, status=203/EXEC
When I check ownership of /var/lib/mysql, I get the following, which seems reasonable, i.e. the user mysql has full ownership of this folder.
apil@apil-dlrig:~$ sudo ls -la /var/lib/mysql
total 176212
drwx------ 7 mysql mysql 4096 Jun 19 07:34 .
drwxr-xr-x 79 root root 4096 Oct 30 2019 ..
-rw-r----- 1 mysql mysql 56 Oct 20 2019 auto.cnf
-rw------- 1 mysql mysql 1680 Nov 22 2019 ca-key.pem
-rw-r--r-- 1 mysql mysql 1112 Nov 22 2019 ca.pem
-rw-r--r-- 1 mysql mysql 1112 Nov 22 2019 client-cert.pem
-rw------- 1 mysql mysql 1676 Nov 22 2019 client-key.pem
-rw-r--r-- 1 mysql mysql 0 May 5 06:38 debian-5.7.flag
drwxr-x--- 2 mysql mysql 4096 Jun 6 13:44 foo
-rw-r----- 1 mysql mysql 665 Jun 19 07:34 ib_buffer_pool
-rw-r----- 1 mysql mysql 79691776 Jun 19 07:34 ibdata1
-rw-r----- 1 mysql mysql 50331648 Jun 19 07:34 ib_logfile0
-rw-r----- 1 mysql mysql 50331648 Oct 20 2019 ib_logfile1
-rw-r----- 1 mysql mysql 155 Jun 16 07:23 keyring_backup
drwxr-x--- 2 mysql mysql 4096 May 5 06:38 mysql
-rw-r--r-- 1 mysql mysql 6 May 5 06:38 mysql_upgrade_info
drwxr-x--- 2 mysql mysql 4096 May 5 06:38 performance_schema
-rw------- 1 mysql mysql 1680 Nov 22 2019 private_key.pem
drwxr-x--- 2 mysql mysql 4096 Jun 16 07:25 prod
-rw-r--r-- 1 mysql mysql 452 Nov 22 2019 public_key.pem
-rw-r--r-- 1 mysql mysql 1112 Nov 22 2019 server-cert.pem
-rw------- 1 mysql mysql 1680 Nov 22 2019 server-key.pem
drwxr-x--- 2 mysql mysql 12288 Nov 22 2019 sys
The /etc/systemd/system/multi-user.target.wants/mysql.service file looks like the following. Nothing should have changed here, i.e. it is as default as MySQL comes.
# MySQL systemd service file
[Unit]
Description=MySQL Community Server
After=network.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
PermissionsStartOnly=true
ExecStartPre=/usr/share/mysql/mysql-systemd-start pre
ExecStart=/usr/sbin/mysqld
ExecStartPost=/usr/share/mysql/mysql-systemd-start post
TimeoutSec=600
Restart=on-failure
RuntimeDirectory=mysqld
RuntimeDirectoryMode=755
Wondering what could be going wrong. Any help would be appreciated.
Thanks
If you look at the original error, the issue is: ExecStart=/usr/sbin/mysqld (code=exited, status=203/EXEC). That looks like some kind of execute-permission problem with the mysqld file, so I checked it too:
ls -la /usr/sbin/mysqld, which returned
-rw-r--r-- 1 root root 24585896 Apr 30 10:52 /usr/sbin/mysqld
So the issue (I thought) was that user root didn't have execute permission. Look at the first three letters, rw-: the last dash means no execute bit.
So I simply ran chmod 777 /usr/sbin/mysqld, after which the permissions read
-rwxrwxrwx 1 root root 24585896 Apr 30 10:52 /usr/sbin/mysqld
Now, systemctl start mysql.service runs just fine.
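As a side note, 777 is broader than strictly needed here; restoring the conventional permissions should, I believe, work just as well, since only the execute bit was missing:
sudo chmod 0755 /usr/sbin/mysqld
ls -la /usr/sbin/mysqld    # expect -rwxr-xr-x 1 root root ...
That keeps the binary world-executable while only root can modify it.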
It's amazing how simply the process of writing a question on stackoverflow actually helps me solve a problem 80% of the time. Thanks again, folks.

Failed to start libvirtd.service: Unit -.mount is masked

I removed a disk (mounted at /win-vm) from the host, which held a KVM volume pool. Now I can't start libvirtd:
root@ws:/# systemctl start libvirtd
Failed to start libvirtd.service: Unit -.mount is masked.
Logs
root@ws:/# journalctl -u libvirtd.service
-- Logs begin at Thu 2020-02-13 17:43:58 CET, end at Thu 2020-02-13 18:25:32 CET. --
Feb 13 17:45:24 ws systemd[1]: Starting Virtualization daemon...
Feb 13 17:45:24 ws systemd[1]: Started Virtualization daemon.
Feb 13 17:45:25 ws libvirtd[1157]: libvirt version: 5.0.0, package: 4 (Guido Günther <agx@sigxcpu.org> Mon, 17 Jun 2019 19:05:40 +0200)
Feb 13 17:45:25 ws libvirtd[1157]: hostname: ws
Feb 13 17:45:25 ws libvirtd[1157]: cannot open directory '/win-vm': No such file or directory
Feb 13 17:45:25 ws libvirtd[1157]: internal error: Failed to autostart storage pool 'win-vm': cannot open directory '/win-vm': No such file or directory
Feb 13 17:45:31 ws libvirtd[1157]: Failed to open file '/sys/class/net/veth0eac943/operstate': No such file or directory
Feb 13 17:45:31 ws libvirtd[1157]: unable to read: /sys/class/net/veth0eac943/operstate: No such file or directory
Feb 13 17:45:31 ws libvirtd[1157]: Failed to open file '/sys/class/net/veth725116c/operstate': No such file or directory
Feb 13 17:45:31 ws libvirtd[1157]: unable to read: /sys/class/net/veth725116c/operstate: No such file or directory
Feb 13 17:56:42 ws libvirtd[1157]: Cannot recv data: Connection reset by peer
Feb 13 17:56:42 ws libvirtd[1157]: Failed to connect socket to '/var/run/libvirt/virtlogd-sock': Connection refused
Feb 13 17:57:47 ws libvirtd[1157]: Failed to connect socket to '/var/run/libvirt/virtlogd-sock': Connection refused
Feb 13 17:57:47 ws libvirtd[1157]: Failed to connect socket to '/var/run/libvirt/virtlogd-sock': Connection refused
Feb 13 17:58:48 ws libvirtd[1157]: Failed to connect socket to '/var/run/libvirt/virtlogd-sock': Connection refused
Feb 13 17:58:48 ws libvirtd[1157]: Failed to connect socket to '/var/run/libvirt/virtlogd-sock': Connection refused
Feb 13 17:59:12 ws libvirtd[1157]: Failed to connect socket to '/var/run/libvirt/virtlogd-sock': Connection refused
Feb 13 17:59:12 ws libvirtd[1157]: Failed to connect socket to '/var/run/libvirt/virtlogd-sock': Connection refused
Feb 13 17:59:37 ws libvirtd[1157]: Failed to connect socket to '/var/run/libvirt/virtlogd-sock': Connection refused
Feb 13 17:59:37 ws libvirtd[1157]: Failed to connect socket to '/var/run/libvirt/virtlogd-sock': Connection refused
Feb 13 18:02:35 ws systemd[1]: Stopping Virtualization daemon...
Feb 13 18:02:35 ws systemd[1]: libvirtd.service: Succeeded.
Feb 13 18:02:35 ws systemd[1]: Stopped Virtualization daemon.
I have an empty /win-vm directory, but the logs say the directory doesn't exist. Is there a workaround? Or can I start libvirtd without it checking the volume pools?
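For what it's worth, a first thing to check (a sketch under the assumption that the mask is an override symlink in /etc/systemd/system, not a verified fix) would be whether the root mount unit really is masked, and if so to unmask it before retrying:
# See how systemd views the root mount unit
systemctl status -- -.mount
ls -l /etc/systemd/system/-.mount    # a symlink to /dev/null here means the unit is masked
# If it is masked, remove the mask and retry
sudo systemctl unmask -- -.mount
sudo systemctl daemon-reload
sudo systemctl start libvirtd
The missing /win-vm pool itself only produces the autostart warning in the earlier log; the masked -.mount unit is what blocks the service from starting at all.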

Percona MySQL Server working but filling the messages log with errors

I have Percona MySQL server 5.7 running under CentOS 7 and although mysql is running without any noticeable errors, it is filling my /var/log/messages with the following every ten seconds:
Nov 15 10:07:27 server systemd: mysqld.service holdoff time over, scheduling restart.
Nov 15 10:07:27 server systemd: Starting MySQL Percona Server...
Nov 15 10:07:27 server mysqld_safe: 171115 10:07:27 mysqld_safe Adding '/usr/lib64/libjemalloc.so.1' to LD_PRELOAD for mysqld
Nov 15 10:07:27 server mysqld_safe: 171115 10:07:27 mysqld_safe Logging to '/var/lib/mysql/server.local.err'.
Nov 15 10:07:27 server mysqld_safe: 171115 10:07:27 mysqld_safe A mysqld process already exists
Nov 15 10:07:27 server systemd: mysqld.service: main process exited, code=exited, status=1/FAILURE
Nov 15 10:07:28 server systemd: Failed to start MySQL Percona Server.
Nov 15 10:07:28 server systemd: Unit mysqld.service entered failed state.
Nov 15 10:07:28 server systemd: Triggering OnFailure= dependencies of mysqld.service.
Nov 15 10:07:28 server systemd: mysqld.service failed.
Nov 15 10:07:28 server systemd: Started Service Status Monitor.
Nov 15 10:07:28 server systemd: Starting Service Status Monitor...
Even though the log states that it failed to start the Percona server, it appears to be working, as my website is still executing MySQL queries. I know very little about MySQL administration and was hoping a MySQL guru could shed some light on what is happening.
The clue is here: "A mysqld process already exists". It can't start mysqld because another mysqld process is already running, and using the same port. You need to kill that process before the one you tried to start can start.
Re your comment:
Since this is CentOS 7, I assume mysql.service is being called by systemd.
In my experience, if you start mysqld "ad hoc" without using systemd, then systemd has no idea that it's running, and tries to start mysqld on its own. Systemd also cannot shut down an instance of mysqld unless it started that instance.
If the mysqld process is active, find it with ps -ef | grep mysqld, then kill it with kill -9 <pid>.
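Putting that together, a minimal sequence might look like this (assuming the stray mysqld was started outside systemd and the unit is named mysqld.service, as in the log above):
# Find the ad-hoc mysqld instance and note its PID
ps -ef | grep [m]ysqld
# Prefer a clean shutdown (may need -u root -p); fall back to kill only if needed
sudo mysqladmin shutdown
# sudo kill <pid>
# Let systemd start and manage it from now on
sudo systemctl start mysqld.service
sudo systemctl status mysqld.service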

Can not run "Percona XtraDB Cluster 5.7" on Centos 7

I want used "Percona XtraDB Cluster 5.7".
So I installed "Percona XtraDB Cluster" by official guide.
https://www.percona.com/doc/percona-xtradb-cluster/5.7/install/yum.html#yum
But when I add nodes to cluster, my mysql coudn't run.
$ sudo service mysql start
Job for mysql.service failed. See 'systemctl status mysql.service' and 'journalctl -xn' for details.
$ sudo cat /var/log/messages
...
Jan 13 04:19:25 localhost mysqld_safe: 2017-01-12T19:19:25.588436Z mysqld_safe Skipping wsrep-recover for empty datadir: /var/lib/mysql
Jan 13 04:19:25 localhost mysqld_safe: 2017-01-12T19:19:25.590385Z mysqld_safe Assigning 00000000-0000-0000-0000-000000000000:-1 to wsrep_start_position
Jan 13 04:19:27 localhost mysql-systemd: State transfer in progress, setting sleep higher
Jan 13 04:19:40 localhost mysqld_safe: 2017-01-12T19:19:40.723030Z mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
Jan 13 04:19:47 localhost mysql-systemd: /usr/bin/mysql-systemd: line 137: kill: (19791) - No such process
Jan 13 04:19:47 localhost mysql-systemd: ERROR! mysqld_safe with PID 19791 has already exited: FAILURE
Jan 13 04:19:47 localhost systemd: mysql.service: control process exited, code=exited status=1
Jan 13 04:19:47 localhost mysql-systemd: WARNING: mysql pid file /var/run/mysqld/mysqld.pid empty or not readable
Jan 13 04:19:47 localhost mysql-systemd: ERROR! mysql already dead
Jan 13 04:19:47 localhost systemd: mysql.service: control process exited, code=exited status=2
...
I am using CentOS 7 on 3 virtual machines.
My /etc/my.cnf is the default except for the following settings:
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
wsrep_cluster_name=pxc-cluster
wsrep_cluster_address=gcomm://192.168.70.61,192.168.70.62,192.168.70.63
wsrep_node_name=pxc1
wsrep_node_address=192.168.70.61
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth=sstuser:passw0rd
pxc_strict_mode=ENFORCING
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
What should I check?
For the first node, you should bootstrap it instead of just starting it. On CentOS 7 you can do so with:
systemctl start mysql@bootstrap.service
Then, you can go ahead and start nodes 02 and 03 normally:
systemctl start mysql
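Once all three nodes are up, something like the following (run on any node, credentials permitting) should confirm that they have actually joined, with wsrep_cluster_size reporting 3:
mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size';"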

SSH service broken after sshd_config was modified

I modified the sshd_config but the ssh service became unavailable. How do I reset the config?
Here is the log:
Jul 29 14:10:03 bye sshd[578]: /etc/ssh/sshd_config line 6: Badly formatted port number.
Jul 29 14:10:03 bye systemd[1]: ssh.service: main process exited, code=exited, status=255/n/a
Jul 29 14:10:03 bye systemd[1]: Unit ssh.service entered failed state.
Jul 29 14:10:04 bye systemd[1]: ssh.service holdoff time over, scheduling restart.
Jul 29 14:10:04 bye systemd[1]: Stopping OpenBSD Secure Shell server...
Jul 29 14:10:04 bye systemd[1]: Starting Google Compute Engine VM initialization...
Jul 29 14:10:04 bye systemd[1]: Started Google Compute Engine VM initialization.
Jul 29 14:10:04 bye systemd[1]: Starting OpenBSD Secure Shell server...
Jul 29 14:10:04 bye systemd[1]: Started OpenBSD Secure Shell server.
Jul 29 14:10:04 bye sshd[582]: /etc/ssh/sshd_config line 6: Badly formatted port number.
Jul 29 14:10:04 bye systemd[1]: ssh.service: main process exited, code=exited, status=255/n/a
Jul 29 14:10:04 bye systemd[1]: Unit ssh.service entered failed state.
Jul 29 14:10:04 bye systemd[1]: ssh.service holdoff time over, scheduling restart.
Since you don't have access to your instance and you need to modify the sshd_config file, you can either:
Delete the instance keeping the boot disk, attach it to another instance as secondary, modify the sshd_config file, detach the disk and then create a new instance using that disk.
Or you can modify the sshd_config file using the following startup-script:
#!/bin/bash
/bin/sed -i.bak 's/^Port .*/Port 22/g' /etc/ssh/sshd_config
This startup-script will modify the line that starts with "Port" to "Port 22". Also it will create a backup at /etc/ssh/sshd_config.bak.
After updating the instance metadata with the startup-script you need to reboot the instance, because startup scripts are executed when the instance boots up. Once you regain access to the instance, remove the script so it is not executed again unnecessarily.
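For reference, one way to attach the script is with the gcloud CLI (a sketch; the instance name my-instance, the zone, and the file name fix-sshd.sh are placeholders):
# Upload the script as startup-script metadata
gcloud compute instances add-metadata my-instance \
    --zone=us-central1-a \
    --metadata-from-file startup-script=fix-sshd.sh
# Reboot so the startup script runs on the next boot
gcloud compute instances reset my-instance --zone=us-central1-a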
I hope it helps.