Output from a curl command using system.run - Zabbix

I have an active Zabbix item running a curl command on my server.
Key:
system.run[curl http://localhost:8080/mypage]
When I run this curl command manually the output is a number, but in Zabbix I get:
Output:
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 3 100 3 0 0 434 0 --:--:-- --:--:-- --:--:-- 500
146
The only thing I want to see is the '146' at the end. Can I stop Zabbix from outputting the other information?
Preferably I would like the item's data type to be Numeric (decimal), but I'm having to set it to Text for the item to work. Am I going about this the wrong way?

Add the -s (--silent) parameter to curl, like so:
system.run[curl -s http://localhost:8080/mypage]
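The -s (--silent) flag suppresses curl's progress meter, which is the extra output Zabbix was capturing. If you still want curl to report genuine errors while otherwise staying quiet, -S (--show-error) can be combined with it; a sketch of the item key under that assumption:
system.run[curl -sS http://localhost:8080/mypage]
With the progress meter gone, the item should return only the bare number, so the Numeric (decimal) type should then work.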

Related

How to upload the result .xml file to TestRail using the TestRail API

I am executing automated tests in C# and have generated a result file using the JunitXml.TestLogger NuGet package.
I would like to upload these test results directly from the result XML file.
There is an example in Python on the Gurock blog: https://blog.gurock.com/test-automation-step-four/
I would like to do the same using curl.
If I do it like this:
curl -H "Content-Type: application/json" -u "username:password" "https://testrailurl//index.php?/api/v2/add_results_for_cases/1111" -f "Junitresults.xml"
I get the error:
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 60 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0
curl: (22) The requested URL returned error: 404
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0curl: (6) Could not resolve host: Junitresults.xml
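A note on what the output suggests: curl's lowercase -f is --fail and takes no argument, so "Junitresults.xml" is parsed as a second URL, which is why curl reports (6) Could not resolve host: Junitresults.xml. Sending a file as the request body would use --data-binary @file instead (or -F "field=@file" for a multipart form). Also, add_results_for_cases expects a JSON body rather than raw JUnit XML, so the XML generally has to be converted first. A minimal sketch, assuming the JUnit results have already been converted to a hypothetical results.json:
curl -H "Content-Type: application/json" -u "username:password" --data-binary @results.json "https://testrailurl/index.php?/api/v2/add_results_for_cases/1111"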

Linux web server configuration for an AJAX web page with high CPU consumption [closed]

I have a www service with a chat used by many users at the same time, based on AJAX, which consumes a lot of CPU resources and needs many database connections. I'm asking about the server configuration for such a scenario on Debian 9 with MariaDB, Apache, and PHP 5.6 (7.x does not work with my service). I increased the maximum number of connections in my.cnf (max_connections = 2000) to avoid database errors.
Which other parameters should I change to increase the throughput of my server and decrease the chat delay?
I have run tests: I simultaneously opened 200 chat windows in one browser (Chrome 76.0.3809.100), saw about a 60-second delay (it should be 1 second according to the PHP code), and observed that my server had only 21 web processes. What can I improve in the server configuration?
Below is the output of "ps aux" from the server console, which I use to see CPU usage per process:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
postfix 21107 0 0 83704 6816 ? S 22:12 00:00 pickup
root 25493 0 0 0 0 ? I 22:58 00:00 [kworker/u16:1-e]
root 26082 0 0 0 0 ? I 21:46 00:00 [kworker/6:0-eve]
root 26094 0 0 0 0 ? I 21:46 00:00 [kworker/7:2-cgr]
root 26253 0 0 0 0 ? I 21:47 00:00 [kworker/4:1-eve]
root 26417 0 0 0 0 ? I 21:47 00:00 [kworker/2:0-eve]
www-data 26580 0 0 519484 10732 ? S 21:48 00:00 php-fpm:
www-data 26581 0 0 519484 10732 ? S 21:48 00:00 php-fpm:
root 26585 0 0 0 0 ? I 21:48 00:00 [kworker/1:0-eve]
root 27185 0 0 0 0 ? I 21:51 00:00 [kworker/0:1-eve]
root 28031 0 0 0 0 ? I 23:00 00:00 [kworker/2:2-eve]
postfix 30240 0 0 83704 6800 ? S 23:00 00:00 anvil
root 1029 0,1 0 658632 18944 ? Sl 20:32 00:09 /usr/bin/python3
clamav 734 0,2 2,4 1008592 787196 ? Ssl 20:32 00:25 /usr/sbin/clamd
www-data 9793 3,8 0,3 605264 114312 ? S 22:05 02:22 /usr/sbin/apache2
www-data 10709 3,9 0,3 853140 120500 ? S 22:06 02:25 /usr/sbin/apache2
www-data 10668 4,3 0,3 597996 104668 ? S 22:06 02:40 /usr/sbin/apache2
www-data 9883 4,5 0,3 597060 105312 ? S 22:05 02:47 /usr/sbin/apache2
www-data 9791 4,6 0,3 604864 115324 ? R 22:05 02:52 /usr/sbin/apache2
www-data 9792 5,1 0,3 598828 108416 ? S 22:05 03:07 /usr/sbin/apache2
www-data 10073 5,4 0,3 599192 105860 ? S 22:05 03:18 /usr/sbin/apache2
www-data 10363 5,6 0,3 844492 113288 ? S 22:06 03:25 /usr/sbin/apache
www-data 9794 5,8 0,3 851780 120412 ? S 22:05 03:34 /usr/sbin/apache2
www-data 9795 6 0,3 599708 108424 ? S 22:05 03:42 /usr/sbin/apache2
www-data 9822 6,1 0,3 602568 109008 ? S 22:05 03:45 /usr/sbin/apache2
www-data 10710 6,3 0,3 598836 105540 ? S 22:06 03:52 /usr/sbin/apache2
www-data 27116 7,2 0,3 608036 114096 ? S 22:44 01:37 /usr/sbin/apache2
mysql 946 9 0,6 958820 221472 ? Ssl 20:32 14:01 /usr/sbin/mysqld
www-data 27013 9,5 0,3 598352 104688 ? S 22:44 02:08 /usr/sbin/apache2
www-data 27119 10,5 0,3 602800 110460 ? S 22:44 02:21 /usr/sbin/apache2
www-data 27054 10,6 0,3 596540 104216 ? S 22:44 02:23 /usr/sbin/apache2
www-data 27117 10,8 0,3 596612 103268 ? S 22:44 02:25 /usr/sbin/apache2
Your environment can be improved by the following:
Check the server's hardware and make sure it meets the RAM and CPU requirements.
Use the MySQLTuner script to tune database engine performance.
Check your queries and try to optimize them.
Install caching modules for Apache/PHP, such as memcached and APCu.
Use PHP-FPM, as it is recommended for better performance.
Then run a benchmark test to check whether the hardware specification and configuration are suitable for your environment.
There are many benchmarking tools, such as ab (the Apache HTTP server benchmarking tool), as well as various online tools.
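For instance, a quick load test with ab might look like this (a sketch; the URL and numbers are placeholders):
ab -n 1000 -c 100 http://your-server/chat-endpoint
This issues 1000 requests at a concurrency of 100 and reports requests per second and response-time percentiles, giving you a baseline before and after each configuration change.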

"Exception: No data to insert" while importing dataset into Clickhouse under Docker

I'm trying out ClickHouse by following this manual. I've set up the Docker image. I've also successfully created a table:
CREATE TABLE tax_bills_nyc
(
bbl Int64,
owner_name String,
address String,
tax_class String,
tax_rate String,
emv Float64,
tbea Float64,
bav Float64,
tba String,
property_tax String,
condonumber String,
condo String,
insertion_date DateTime MATERIALIZED now()
)
ENGINE = MergeTree
PARTITION BY tax_class
ORDER BY owner_name
Ok.
I quit the ClickHouse client and checked that the Docker container is up:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
55991995335b yandex/clickhouse-server "/entrypoint.sh" About an hour ago Up About an hour 8123/tcp, 9000/tcp, 9009/tcp some-clickhouse-server
I try to import the sample dataset with the following command:
curl -X GET 'http://taxbills.nyc/tax_bills_june15_bbls.csv' | docker run --rm --link some-clickhouse-server:clickhouse-server yandex/clickhouse-client --host clickhouse-server --input_format_allow_errors_num=10 --query="INSERT INTO test_database.tax_bills_nyc FORMAT CSV"
And I get the following error:
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 176M 0 2641 0 0 7494 0 6:50:28 --:--:-- 6:50:28 7481Code: 108. DB::Exception: No data to insert
0 176M 0 94321 0 0 35245 0 1:27:16 0:00:02 1:27:14 35233
curl: (23) Failed writing body (0 != 16384)
What could be the reason and how to fix that?
I am a bit confused, because docker run is used to start a new container. I am not sure that docker run is what you want to do after checking that your container is already running.
Instead of docker run you should use
docker exec -i <container-id-or-name> <command>
So your line should be (note that docker exec needs the command to run inside the container, here clickhouse-client):
curl -X GET 'http://taxbills.nyc/tax_bills_june15_bbls.csv' | docker exec -i some-clickhouse-server clickhouse-client --query="INSERT INTO test_database.tax_bills_nyc FORMAT CSV"
You can always find information about docker commands in the official documentation.
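To verify the import afterwards, something like this should work (assuming the database and table names from the question):
docker exec -i some-clickhouse-server clickhouse-client --query="SELECT count() FROM test_database.tax_bills_nyc"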

MySQL connect host changes every time in the Docker container, why?

Inside the Docker container I try to log in to the MySQL server on the host. The first time, the host IP is changed, which confuses me, but the second login succeeds. Can anyone explain this strange behavior?
The IP I log in with is 192.168.100.164, but the error info shows IP 172.18.0.4, which is the container's localhost.
More info:
root@b67c39311dbb:~/project# netstat -nr
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 172.18.0.1 0.0.0.0 UG 0 0 0 eth0
172.18.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
root@b67c39311dbb:~/project# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.18.0.4 netmask 255.255.0.0 broadcast 172.18.255.255
ether 02:42:ac:12:00:04 txqueuelen 0 (Ethernet)
RX packets 2099 bytes 2414555 (2.4 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1752 bytes 132863 (132.8 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
loop txqueuelen 1000 (Local Loopback)
RX packets 35 bytes 3216 (3.2 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 35 bytes 3216 (3.2 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Try adding --add-host="localhost:192.168.100.164" when launching docker run. But to my mind this is not good practice. You should move your MySQL database to another container and create a network between them.
That is true: when you start a Docker container, it gets its own IP. You need to map a host port to the Docker container; then, when you connect to the host port, it redirects to the MySQL Docker container. Please look at https://docs.docker.com/config/containers/container-networking/
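As an illustration of the port mapping mentioned above, publishing MySQL's port when starting a containerized MySQL might look like this (a sketch; the image tag, password, and names are placeholders):
docker run -d -p 3306:3306 -e MYSQL_ROOT_PASSWORD=secret --name mysql-server mysql:5.7
Connections to port 3306 on the host are then forwarded to the container.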
I'd suggest creating a bridged Docker network and then creating your container using --add-host as suggested by Alexey.
In a simple script:
DOCKER_NET_NAME='the_docker_network_name'
DOCKER_GATEWAY='172.100.0.1'
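# create a user-defined bridge network with a fixed subnet and gateway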
docker network create \
--driver bridge \
--subnet=172.100.0.0/16 \
--gateway=$DOCKER_GATEWAY \
$DOCKER_NET_NAME
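# run the container on that network, resolving db.onthehost to the bridge gateway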
docker run -d \
--add-host db.onthehost:$DOCKER_GATEWAY \
--restart=always \
--network=$DOCKER_NET_NAME \
--name=magicContainerName \
yourImage:latest
EDIT: creating a network will also simplify communication among containers (if you plan to add more in the future), since you'll be able to use container names instead of their IPs.
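As a usage sketch (assuming MySQL on the host listens on the Docker bridge interface and grants access from the container's subnet), the host database then becomes reachable from inside the container by the added hostname:
mysql -h db.onthehost -u youruser -p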

Run a bash script once and stop

I have this simple script that checks whether MySQL on remote servers (db-test-1 and db-test-2) is in SST mode and sends a message to a Slack channel. The script runs on a third server dedicated to cron jobs. Here is the code:
#!/bin/bash
time=$(date);
array=( db-test-1 db-test-2 )
for i in "${array[@]}"
do
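# look for an in-progress SST (wsrep_sst_xtrabackup-v2) in the remote process list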
S=$(ssh "$i" ps -ef | grep mysql | grep wsrep_sst_xtrabackup-v2);
if [[ "$S" != "" ]]; then
curl -X POST --data-urlencode "payload={\"channel\": \"#db-share-test\", \"username\": \"wsrep_local_state_comment\", \"text\": \"*$i*: ${time}\n>State transfer in progress, setting sleep higher mysqld\", \"icon_emoji\": \":scorpion:\"}" https://hooks.slack.com/services/G824ZJS9N/B6QS5JEKP/ZjV1hmM1k4dZGsf9HDC1o1jd
exit 0
else
curl -X POST --data-urlencode "payload={\"channel\": \"#db-share-test\", \"username\": \"wsrep_local_state_comment\", \"text\": \"*$i*: ${time}\n>State transfer is complete. Server is Synced now.\", \"icon_emoji\": \":scorpion:\"}" https://hooks.slack.com/services/G824ZJS9N/B6QS5JEKP/ZjV1hmM1k4dZGsf9HDC1o1jd
exit 2
fi
done
The two servers, db-test-1 and db-test-2, are part of a PXC cluster. So when I start db-test-1 in SST to join the cluster, I get the following in my Slack channel, as expected:
*db-test-1*: Sun Aug 27 15:12:44 CST 2017
>State transfer in progress, setting sleep higher mysqld
[3:12]
*db-test-1*: Sun Aug 27 15:12:49 CST 2017
State transfer in progress, setting sleep higher mysqld
[3:12]
*db-test-1*: Sun Aug 27 15:12:51 CST 2017
State transfer in progress, setting sleep higher mysqld
[3:12]
*db-test-1*: Sun Aug 27 15:12:54 CST 2017
State transfer in progress, setting sleep higher mysqld
So the messages are arriving approximately every 3 seconds. However, the cron job executing this script is scheduled to run every minute, so I am not sure why it is sending results every 3 seconds or so, as shown above.
How can I ensure that the messages are sent every 1 minute, to avoid my channel being flooded with the same message every 3 seconds? Also, how can I make sure that when the SST is finished, a single message is sent to Slack to indicate that the state transfer is complete, instead of this message being sent non-stop every time the two DB servers are not in SST mode?
Besides checking that the cron entry is properly set, probably something like:
@every_minute /path/to/script
or
*/1 * * * * /path/to/script
It would also be good to ensure that only one instance of the script is running at a time; try guarding the crontab entry like this:
*/1 * * * * pgrep script > /dev/null || /path/to/script
or by using something like:
#!/bin/sh
# Exit if another instance of this script is already running.
# grep -v "$$" excludes this process's own PID so the check does not match itself.
if ps -ef | grep -v grep | grep your_script_name | grep -v "$$" ; then
  exit 0
fi
# your code goes below
# ...
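An alternative that avoids the ps/grep pitfalls is flock from util-linux, which holds a lock file for the duration of the run; a sketch for the crontab entry (assuming flock is installed, with a placeholder lock-file path):
*/1 * * * * flock -n /tmp/sst_check.lock /path/to/script
With -n, flock exits immediately instead of waiting if a previous run still holds the lock, so overlapping runs are simply skipped.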