Ambari Hadoop cluster: the best way to modify configurations (JSON)

We have an Ambari cluster, and the clients are installed on Red Hat Linux machines:
yum list | grep ambari-server
ambari-server.x86_64 2.5.0.3-7 @ambari-2.5.0.3
We found a nice way to update a value in the Ambari cluster, as follows:
Update a parameter ( from Ambari server machine )
/var/lib/ambari-server/resources/scripts/configs.sh set localhost c1 mapred-site "mapreduce.map.memory.mb" "512"
Where:
CONFIG_TYPE = mapred-site
CONFIG_KEY = mapreduce.map.memory.mb
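Note that the same script also exposes a get action, which dumps every CONFIG_KEY currently stored under a given CONFIG_TYPE; a hedged example based on the script's --help, reusing the cluster name c1 from the set example above:
/var/lib/ambari-server/resources/scripts/configs.sh get localhost c1 mapred-site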
But we have a little problem here:
In my example, mapred-site is the CONFIG_TYPE.
According to the script's --help:
<CONFIG_TYPE>: One of the various configuration types in Ambari. Ex: global, core-site, hdfs-site, mapred-queue-acls, etc.
So how do we know the right CONFIG_TYPE value for a given CONFIG_KEY value?
For more info about the script:
https://cwiki.apache.org/confluence/display/AMBARI/Modify+configurations “Edit configuration using configs.sh” paragraph
Remark: in order to see all CONFIG_TYPE and CONFIG_KEY values, I generated the following blueprint.json file:
curl -u admin:admin -H "X-Requested-By: ambari" -X GET http://101.16.235.2:8080/api/v1/clusters/HDP01?format=blueprint -o /tmp/blueprint.json
grep "\-site" /tmp/blueprint.json
"tez-interactive-site" : {
"hdfs-site" : {
"yarn-site" : {
"hiveserver2-site" : {
"ams-hbase-security-site" : {
"ams-site" : {
"mapred-site" : {
"hive-site" : {
"tez-site" : {
"webhcat-site" : {

You may also clone the Ambari repo and grep/parse the configuration files, like
https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/configuration/core-site.xml
Per-stack configurations like https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.6/services/HDFS/configuration/core-site.xml inherit/override values from common-services and previous stack versions.
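For example, a small (untested) grep over a checked-out Ambari source tree; the XML file that defines the property (e.g. mapred-site.xml) tells you the CONFIG_TYPE:
cd ambari/ambari-server/src/main/resources
grep -rl --include='*.xml' '<name>mapreduce.map.memory.mb</name>' common-services stacks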
Hope this helps


Get Node attributes via knife

I have a requirement where I need to get the hostname, memory, cores, storage, and installed packages for multiple nodes (~1k).
I have approached the solution by using knife:
$ knife search node 'hostname:HostName1 OR hostname:HostName2 OR hostname:HostName3' -a hostname -a cpu.cores -a memory.total -a rpm -a filesystem.by_device -F j|jq '.'
And the typical output of this command looks like this:
{
"results": 3,
"rows": [
{
"MyHostName1": {
"hostname": "MyHostName1",
"cpu.cores": 4,
"memory.total": "15645184kB",
"rpm": {
"loger-multipath": [
{
"version": "0.4.9",
"release": "123.el7",
"arch": "x86_64"
}
],
"python": [
{
"version": "7.19.0",
"release": "19.el7",
"arch": "x86_64"
}
]
},
"filesystem.by_device": {
"/apps/logger/root_my-root": {
"kb_size": "8125880",
"kb_used": "2426760",
"kb_available": "5263308",
"percent_used": "32%",
"mount_options": [
"rw",
"discard",
"data=ordered"
],
"uuid": "87ujrf56-6yu6-654r-yu43-uy67yg43ws67",
"mounts": [
"/"
]
}
}
}
}
}
However, there are details which I do not need:
How can we make the display order match the attribute order given on the command line, i.e. hostname, then cores, then memory, and so on?
We get the file system names and their corresponding sizes, but we also get all the other tag values; how can we get just the file system name and its size (something similar to what the df command gives, e.g. apps/logger/root_vg-apps: kb_size: 3997376)?
The rpm attribute gives us the package name, architecture, version and release; how can we concatenate the output of multiple attributes on a single line (something similar to the output of yum list installed, e.g. loger-multipath.x86_64 0.4.9-123.el7)?
EDIT:
After much googling this is the progress:
knife search node 'HostName1 OR hostname:HostName2' -a cpu.cores -a memory.total -a filesystem.by_device -F j|jq '.rows[]|keys[] as $hostName|"\($hostName),\(.[$hostName]|."cpu.cores"),\(.[$hostName]|."memory.total"),\(.[$hostName]|."filesystem.by_device")"'
And the corresponding output
"HostName1,4,15645184kB,{\"/dev/mapper/root_vg-root\":{\"kb_size\":\"8125880\",\"kb_used\":\"2425220\",\"kb_available\":\"5264848\",\"percent_used\":\"32%\",\"total_inodes\":\"524288\",\"inodes_used\":\"88441\",\"inodes_available\":\"435847\",\"inodes_percent_used\":\"17%\",\"fs_type\":\"ext4\",\"mount_options\":[\"rw\",\"relatime\",\"seclabel\",\"discard\",\"data=ordered\"],\"uuid\":\"rthd-762c-41affff8-8927-065fsee20853c681\",\"mounts\":[\"/\"]},\"devtmpfs\":{\"kb_size\":\"7810756\",\"kb_used\":\"0\",\"kb_available\":\"7810756\",\"percent_used\":\"0%\",\"total_inodes\":\"1952689\",\"inodes_used\":\"403\",\"inodes_available\":\"1952286\",\"inodes_percent_used\":\"1%\",\"fs_type\":\"devtmpfs\",\"mount_options\":[\"rw\",\"nosuid\",\"seclabel\",\"size=7810756k\",\"nr_inodes=1952689\",\"mode=755\"],\"mounts\":[\"/dev\"]},\"tmpfs\":{\"kb_size\":\"1564520\",\"kb_used\":\"0\",\"kb_available\":\"1564520\",\"percent_used\":\"0%\",\"total_inodes\":\"1955648\",\"inodes_used\":\"1\",\"inodes_available\":\"1955647\",\"inodes_percent_used\":\"1%\",\"fs_type\":\"tmpfs\",\"mount_options\":[\"rw\",\"nosuid\",\"nodev\",\"relatime\",\"seclabel\",\"size=1564520k\",\"mode=700\",\"uid=627000\",\"gid=161\"],\"mounts\":[\"/dev/shm\",\"/run\",\"/sys/fs/cgroup\",\"/run/user/0\",\"/run/user/627000\"]},\"/dev/sda1\":{\"kb_size\":\"499656\",\"kb_used\":\"212068\",\"kb_available\":\"250892\",\"percent_used\":\"46%\",\"total_inodes\":\"32768\",\"inodes_used\":\"350\",\"inodes_available\":\"32418\",\"inodes_percent_used\":\"2%\",\"fs_type\":\"ext4\",\"mount_options\":[\"rw\",\"nosuid\",\"nodev\",\"relatime\",\"seclabel\",\"data=ordered\"],\"uuid\":\"857fgg-b2a2-42d8-9db2-dfrferf7544\",\"mounts\":[\"/boot\"]},\"/dev/mapper/root_vg-var\":{\"kb_size\":\"5029504\",\"kb_used\":\"4142128\",\"kb_available\":\"608848\",\"percent_used\":\"88%\",\"total_inodes\":\"327680\",\"inodes_used\":\"6191\",\"inodes_available\":\"321489\",\"inodes_percent_used\":\"2%\",\"fs_type\":\"ext4\",\"mount_options\":[\"rw\",\"nosuid\",\"nodev\",\"relatime\",\"seclabel\",\"discard\",\"noacl\",\"stripe=16\",\"data=ordered\"],\"uuid\":\"dfef155456-ab4c-48f4-a7a5-5454sfdf\",\"mounts\":[\"/var\"]},\"/dev/mapper/root_vg-var--tmp\":{\"kb_size\":\"1998672\",\"kb_used\":\"6180\",\"kb_available\":\"1871252\",\"percent_used\":\"1%\",\"total_inodes\":\"131072\",\"inodes_used\":\"21\",\"inodes_available\":\"131051\",\"inodes_percent_used\":\"1%\",\"fs_type\":\"ext4\",\"mount_options\":[\"rw\",\"nosuid\",\"nodev\",\"relatime\",\"seclabel\",\"discard\",\"stripe=16\",\"data=ordered\"],\"uuid\":\"vfghhhht542-ea7c-4c8b-9afd-frfgvbbn\",\"mounts\":[\"/var/tmp\"]},\"/dev/mapper/root_vg-apps\":{\"kb_size\":\"3997376\",\"kb_used\":\"495044\",\"kb_available\":\"3276236\",\"percent_used\":\"14%\",\"total_inodes\":\"262144\",\"inodes_used\":\"3203\",\"inodes_available\":\"258941\",\"inodes_percent_used\":\"2%\",\"fs_type\":\"ext4\",\"mount_options\":[\"rw\",\"nodev\",\"relatime\",\"seclabel\",\"stripe=16\",\"data=ordered\"],\"uuid\":\"fvfbvfbv55444-a813-4d9c-a9ac-7d50cfbfe345\",\"mounts\":[\"/apps\"]},\"/dev/mapper/root_vg-kdump\":{\"kb_size\":\"1998672\",\"kb_used\":\"6144\",\"kb_available\":\"1871288\",\"percent_used\":\"1%\",\"total_inodes\":\"131072\",\"inodes_used\":\"11\",\"inodes_available\":\"131061\",\"inodes_percent_used\":\"1%\",\"fs_type\":\"ext4\",\"mount_options\":[\"rw\",\"nosuid\",\"nodev\",\"relatime\",\"seclabel\",\"discard\",\"stripe=16\",\"data=ordered\"],\"uuid\":\"frgrghg55-9673-47f5-aaac-4g4g4g1g1\",\"mounts\":[\"/kdump\"]},\"/dev/mapper/root_vg-home\"
:{\"kb_size\":\"1998672\",\"kb_used\":\"6544\",\"kb_available\":\"1870888\",\"percent_used\":\"1%\",\"total_inodes\":\"131072\",\"inodes_used\":\"83\",\"inodes_available\":\"130989\",\"inodes_percent_used\":\"1%\",\"fs_type\":\"ext4\",\"mount_options\":[\"rw\",\"nosuid\",\"nodev\",\"relatime\",\"seclabel\",\"discard\",\"stripe=16\",\"data=ordered\"],\"uuid\":\"frbj4874-fe4d-4f82-ad86-41554ffv\",\"mounts\":[\"/home\"]},\"/dev/mapper/root_vg-tmp\":{\"kb_size\":\"1998672\",\"kb_used\":\"7916\",\"kb_available\":\"1869516\",\"percent_used\":\"1%\",\"total_inodes\":\"131072\",\"inodes_used\":\"51\",\"inodes_available\":\"131021\",\"inodes_percent_used\":\"1%\",\"fs_type\":\"ext4\",\"mount_options\":[\"rw\",\"nosuid\",\"nodev\",\"relatime\",\"seclabel\",\"discard\",\"stripe=16\",\"data=ordered\"],\"uuid\":\"a995fd05-90a8-46a8-a192-0a02f68e476a\",\"mounts\":[\"/tmp\"]},\"/dev/mapper/root_vg-gcis2\":{\"kb_size\":\"20511312\",\"kb_used\":\"3191304\",\"kb_available\":\"16255048\",\"percent_used\":\"17%\",\"total_inodes\":\"1310720\",\"inodes_used\":\"33967\",\"inodes_available\":\"1276753\",\"inodes_percent_used\":\"3%\",\"fs_type\":\"ext4\",\"mount_options\":[\"rw\",\"relatime\",\"seclabel\",\"stripe=16\",\"data=ordered\"],\"uuid\":\"fb4bb78a-7f33-47f1-87a6-dcbe50bc6349\",\"mounts\":[\"/apps/gcis2\"]},\"sysfs\":{\"fs_type\":\"sysfs\",\"mount_options\":[\"rw\",\"nosuid\",\"nodev\",\"noexec\",\"relatime\",\"seclabel\"],\"mounts\":[\"/sys\"]},\"proc\":{\"fs_type\":\"proc\",\"mount_options\":[\"rw\",\"nosuid\",\"nodev\",\"noexec\",\"relatime\"],\"mounts\":[\"/proc\"]},\"securityfs\":{\"fs_type\":\"securityfs\",\"mount_options\":[\"rw\",\"nosuid\",\"nodev\",\"noexec\",\"relatime\"],\"mounts\":[\"/sys/kernel/security\"]},\"devpts\":{\"fs_type\":\"devpts\",\"mount_options\":[\"rw\",\"nosuid\",\"noexec\",\"relatime\",\"seclabel\",\"gid=5\",\"mode=620\",\"ptmxmode=000\"],\"mounts\":[\"/dev/pts\"]},\"cgroup\":{\"fs_type\":\"cgroup\",\"mount_options\":[\"rw\",\"nosuid\",\"nodev\",\"noexec\",\"relatime\",\"seclabel\",\"net_prio\",\"net_cls\"],\"mounts\":[\"/sys/fs/cgroup/systemd\",\"/sys/fs/cgroup/perf_event\",\"/sys/fs/cgroup/blkio\",\"/sys/fs/cgroup/freezer\",\"/sys/fs/cgroup/cpu,cpuacct\",\"/sys/fs/cgroup/memory\",\"/sys/fs/cgroup/pids\",\"/sys/fs/cgroup/hugetlb\",\"/sys/fs/cgroup/cpuset\",\"/sys/fs/cgroup/devices\",\"/sys/fs/cgroup/net_cls,net_prio\"]},\"pstore\":{\"fs_type\":\"pstore\",\"mount_options\":[\"rw\",\"nosuid\",\"nodev\",\"noexec\",\"relatime\"],\"mounts\":[\"/sys/fs/pstore\"]},\"configfs\":{\"fs_type\":\"configfs\",\"mount_options\":[\"rw\",\"relatime\"],\"mounts\":[\"/sys/kernel/config\"]},\"selinuxfs\":{\"fs_type\":\"selinuxfs\",\"mount_options\":[\"rw\",\"relatime\"],\"mounts\":[\"/sys/fs/selinux\"]},\"systemd-1\":{\"fs_type\":\"autofs\",\"mount_options\":[\"rw\",\"relatime\",\"fd=30\",\"pgrp=1\",\"timeout=0\",\"minproto=5\",\"maxproto=5\",\"direct\",\"pipe_ino=16127\"],\"mounts\":[\"/proc/sys/fs/binfmt_misc\"]},\"debugfs\":{\"fs_type\":\"debugfs\",\"mount_options\":[\"rw\",\"relatime\"],\"mounts\":[\"/sys/kernel/debug\"]},\"hugetlbfs\":{\"fs_type\":\"hugetlbfs\",\"mount_options\":[\"rw\",\"relatime\",\"seclabel\"],\"mounts\":[\"/dev/hugepages\"]},\"mqueue\":{\"fs_type\":\"mqueue\",\"mount_options\":[\"rw\",\"relatime\",\"seclabel\"],\"mounts\":[\"/dev/mqueue\"]},\"binfmt_misc\":{\"fs_type\":\"binfmt_misc\",\"mount_options\":[\"rw\",\"relatime\"],\"mounts\":[\"/proc/sys/fs/binfmt_misc\"]},\"fusectl\":{\"fs_type\":\"fusectl\",\"mount_options\":[\"rw\",\"relatime\"],\"mounts\":[\"/sys/fs/fuse/c
onnections\"]},\"/dev/fd0\":{},\"/dev/sda\":{},\"/dev/sda2\":{\"fs_type\":\"LVM2_member\",\"uuid\":\"zrtcBv-4y6D-LG2a-Wt6M-h18d-K0BQ-zyl11h\"},\"/dev/mapper/root_vg-swap\":{\"fs_type\":\"swap\",\"uuid\":\"vfbvejgbg5456454-93a1-4a67-b33f-gtrbbbn\"},\"/dev/mapper/root_vg-pool0_tmeta\":{},\"/dev/mapper/root_vg-pool0-tpool\":{},\"/dev/mapper/root_vg-pool0\":{},\"/dev/mapper/root_vg-pool0_tdata\":{},\"rootfs\":{\"fs_type\":\"rootfs\",\"mount_options\":[\"rw\"],\"mounts\":[\"/\"]}}"
"HostName2,4,15645184kB,{\"/dev/mapper/root_vg-root\":{\"kb_size\":\"8125880\",\"kb_used\":\"2425220\",\"kb_available\":\"5264848\",\"percent_used\":\"32%\",\"total_inodes\":\"524288\",\"inodes_used\":\"88441\",\"inodes_available\":\"435847\",\"inodes_percent_used\":\"17%\",\"fs_type\":\"ext4\",\"mount_options\":[\"rw\",\"relatime\",\"seclabel\",\"discard\",\"data=ordered\"],\"uuid\":\"rthd-762c-41affff8-8927-065fsee20853c681\",\"mounts\":[\"/\"]},\"devtmpfs\":{\"kb_size\":\"7810756\",\"kb_used\":\"0\",\"kb_available\":\"7810756\",\"percent_used\":\"0%\",\"total_inodes\":\"1952689\",\"inodes_used\":\"403\",\"inodes_available\":\"1952286\",\"inodes_percent_used\":\"1%\",\"fs_type\":\"devtmpfs\",\"mount_options\":[\"rw\",\"nosuid\",\"seclabel\",\"size=7810756k\",\"nr_inodes=1952689\",\"mode=755\"],\"mounts\":[\"/dev\"]},\"tmpfs\":{\"kb_size\":\"1564520\",\"kb_used\":\"0\",\"kb_available\":\"1564520\",\"percent_used\":\"0%\",\"total_inodes\":\"1955648\",\"inodes_used\":\"1\",\"inodes_available\":\"1955647\",\"inodes_percent_used\":\"1%\",\"fs_type\":\"tmpfs\",\"mount_options\":[\"rw\",\"nosuid\",\"nodev\",\"relatime\",\"seclabel\",\"size=1564520k\",\"mode=700\",\"uid=627000\",\"gid=161\"],\"mounts\":[\"/dev/shm\",\"/run\",\"/sys/fs/cgroup\",\"/run/user/0\",\"/run/user/627000\"]},\"/dev/sda1\":{\"kb_size\":\"499656\",\"kb_used\":\"212068\",\"kb_available\":\"250892\",\"percent_used\":\"46%\",\"total_inodes\":\"32768\",\"inodes_used\":\"350\",\"inodes_available\":\"32418\",\"inodes_percent_used\":\"2%\",\"fs_type\":\"ext4\",\"mount_options\":[\"rw\",\"nosuid\",\"nodev\",\"relatime\",\"seclabel\",\"data=ordered\"],\"uuid\":\"857fgg-b2a2-42d8-9db2-dfrferf7544\",\"mounts\":[\"/boot\"]},\"/dev/mapper/root_vg-var\":{\"kb_size\":\"5029504\",\"kb_used\":\"4142128\",\"kb_available\":\"608848\",\"percent_used\":\"88%\",\"total_inodes\":\"327680\",\"inodes_used\":\"6191\",\"inodes_available\":\"321489\",\"inodes_percent_used\":\"2%\",\"fs_type\":\"ext4\",\"mount_options\":[\"rw\",\"nosuid\",\"nodev\",\"relatime\",\"seclabel\",\"discard\",\"noacl\",\"stripe=16\",\"data=ordered\"],\"uuid\":\"dfef155456-ab4c-48f4-a7a5-5454sfdf\",\"mounts\":[\"/var\"]},\"/dev/mapper/root_vg-var--tmp\":{\"kb_size\":\"1998672\",\"kb_used\":\"6180\",\"kb_available\":\"1871252\",\"percent_used\":\"1%\",\"total_inodes\":\"131072\",\"inodes_used\":\"21\",\"inodes_available\":\"131051\",\"inodes_percent_used\":\"1%\",\"fs_type\":\"ext4\",\"mount_options\":[\"rw\",\"nosuid\",\"nodev\",\"relatime\",\"seclabel\",\"discard\",\"stripe=16\",\"data=ordered\"],\"uuid\":\"vfghhhht542-ea7c-4c8b-9afd-frfgvbbn\",\"mounts\":[\"/var/tmp\"]},\"/dev/mapper/root_vg-apps\":{\"kb_size\":\"3997376\",\"kb_used\":\"495044\",\"kb_available\":\"3276236\",\"percent_used\":\"14%\",\"total_inodes\":\"262144\",\"inodes_used\":\"3203\",\"inodes_available\":\"258941\",\"inodes_percent_used\":\"2%\",\"fs_type\":\"ext4\",\"mount_options\":[\"rw\",\"nodev\",\"relatime\",\"seclabel\",\"stripe=16\",\"data=ordered\"],\"uuid\":\"fvfbvfbv55444-a813-4d9c-a9ac-7d50cfbfe345\",\"mounts\":[\"/apps\"]},\"/dev/mapper/root_vg-kdump\":{\"kb_size\":\"1998672\",\"kb_used\":\"6144\",\"kb_available\":\"1871288\",\"percent_used\":\"1%\",\"total_inodes\":\"131072\",\"inodes_used\":\"11\",\"inodes_available\":\"131061\",\"inodes_percent_used\":\"1%\",\"fs_type\":\"ext4\",\"mount_options\":[\"rw\",\"nosuid\",\"nodev\",\"relatime\",\"seclabel\",\"discard\",\"stripe=16\",\"data=ordered\"],\"uuid\":\"frgrghg55-9673-47f5-aaac-4g4g4g1g1\",\"mounts\":[\"/kdump\"]},\"/dev/mapper/root_vg-home\"
:{\"kb_size\":\"1998672\",\"kb_used\":\"6544\",\"kb_available\":\"1870888\",\"percent_used\":\"1%\",\"total_inodes\":\"131072\",\"inodes_used\":\"83\",\"inodes_available\":\"130989\",\"inodes_percent_used\":\"1%\",\"fs_type\":\"ext4\",\"mount_options\":[\"rw\",\"nosuid\",\"nodev\",\"relatime\",\"seclabel\",\"discard\",\"stripe=16\",\"data=ordered\"],\"uuid\":\"frbj4874-fe4d-4f82-ad86-41554ffv\",\"mounts\":[\"/home\"]},\"/dev/mapper/root_vg-tmp\":{\"kb_size\":\"1998672\",\"kb_used\":\"7916\",\"kb_available\":\"1869516\",\"percent_used\":\"1%\",\"total_inodes\":\"131072\",\"inodes_used\":\"51\",\"inodes_available\":\"131021\",\"inodes_percent_used\":\"1%\",\"fs_type\":\"ext4\",\"mount_options\":[\"rw\",\"nosuid\",\"nodev\",\"relatime\",\"seclabel\",\"discard\",\"stripe=16\",\"data=ordered\"],\"uuid\":\"a995fd05-90a8-46a8-a192-0a02f68e476a\",\"mounts\":[\"/tmp\"]},\"/dev/mapper/root_vg-gcis2\":{\"kb_size\":\"20511312\",\"kb_used\":\"3191304\",\"kb_available\":\"16255048\",\"percent_used\":\"17%\",\"total_inodes\":\"1310720\",\"inodes_used\":\"33967\",\"inodes_available\":\"1276753\",\"inodes_percent_used\":\"3%\",\"fs_type\":\"ext4\",\"mount_options\":[\"rw\",\"relatime\",\"seclabel\",\"stripe=16\",\"data=ordered\"],\"uuid\":\"fb4bb78a-7f33-47f1-87a6-dcbe50bc6349\",\"mounts\":[\"/apps/gcis2\"]},\"sysfs\":{\"fs_type\":\"sysfs\",\"mount_options\":[\"rw\",\"nosuid\",\"nodev\",\"noexec\",\"relatime\",\"seclabel\"],\"mounts\":[\"/sys\"]},\"proc\":{\"fs_type\":\"proc\",\"mount_options\":[\"rw\",\"nosuid\",\"nodev\",\"noexec\",\"relatime\"],\"mounts\":[\"/proc\"]},\"securityfs\":{\"fs_type\":\"securityfs\",\"mount_options\":[\"rw\",\"nosuid\",\"nodev\",\"noexec\",\"relatime\"],\"mounts\":[\"/sys/kernel/security\"]},\"devpts\":{\"fs_type\":\"devpts\",\"mount_options\":[\"rw\",\"nosuid\",\"noexec\",\"relatime\",\"seclabel\",\"gid=5\",\"mode=620\",\"ptmxmode=000\"],\"mounts\":[\"/dev/pts\"]},\"cgroup\":{\"fs_type\":\"cgroup\",\"mount_options\":[\"rw\",\"nosuid\",\"nodev\",\"noexec\",\"relatime\",\"seclabel\",\"net_prio\",\"net_cls\"],\"mounts\":[\"/sys/fs/cgroup/systemd\",\"/sys/fs/cgroup/perf_event\",\"/sys/fs/cgroup/blkio\",\"/sys/fs/cgroup/freezer\",\"/sys/fs/cgroup/cpu,cpuacct\",\"/sys/fs/cgroup/memory\",\"/sys/fs/cgroup/pids\",\"/sys/fs/cgroup/hugetlb\",\"/sys/fs/cgroup/cpuset\",\"/sys/fs/cgroup/devices\",\"/sys/fs/cgroup/net_cls,net_prio\"]},\"pstore\":{\"fs_type\":\"pstore\",\"mount_options\":[\"rw\",\"nosuid\",\"nodev\",\"noexec\",\"relatime\"],\"mounts\":[\"/sys/fs/pstore\"]},\"configfs\":{\"fs_type\":\"configfs\",\"mount_options\":[\"rw\",\"relatime\"],\"mounts\":[\"/sys/kernel/config\"]},\"selinuxfs\":{\"fs_type\":\"selinuxfs\",\"mount_options\":[\"rw\",\"relatime\"],\"mounts\":[\"/sys/fs/selinux\"]},\"systemd-1\":{\"fs_type\":\"autofs\",\"mount_options\":[\"rw\",\"relatime\",\"fd=30\",\"pgrp=1\",\"timeout=0\",\"minproto=5\",\"maxproto=5\",\"direct\",\"pipe_ino=16127\"],\"mounts\":[\"/proc/sys/fs/binfmt_misc\"]},\"debugfs\":{\"fs_type\":\"debugfs\",\"mount_options\":[\"rw\",\"relatime\"],\"mounts\":[\"/sys/kernel/debug\"]},\"hugetlbfs\":{\"fs_type\":\"hugetlbfs\",\"mount_options\":[\"rw\",\"relatime\",\"seclabel\"],\"mounts\":[\"/dev/hugepages\"]},\"mqueue\":{\"fs_type\":\"mqueue\",\"mount_options\":[\"rw\",\"relatime\",\"seclabel\"],\"mounts\":[\"/dev/mqueue\"]},\"binfmt_misc\":{\"fs_type\":\"binfmt_misc\",\"mount_options\":[\"rw\",\"relatime\"],\"mounts\":[\"/proc/sys/fs/binfmt_misc\"]},\"fusectl\":{\"fs_type\":\"fusectl\",\"mount_options\":[\"rw\",\"relatime\"],\"mounts\":[\"/sys/fs/fuse/c
onnections\"]},\"/dev/fd0\":{},\"/dev/sda\":{},\"/dev/sda2\":{\"fs_type\":\"LVM2_member\",\"uuid\":\"zrtcBv-4y6D-LG2a-Wt6M-h18d-K0BQ-zyl11h\"},\"/dev/mapper/root_vg-swap\":{\"fs_type\":\"swap\",\"uuid\":\"vfbvejgbg5456454-93a1-4a67-b33f-gtrbbbn\"},\"/dev/mapper/root_vg-pool0_tmeta\":{},\"/dev/mapper/root_vg-pool0-tpool\":{},\"/dev/mapper/root_vg-pool0\":{},\"/dev/mapper/root_vg-pool0_tdata\":{},\"rootfs\":{\"fs_type\":\"rootfs\",\"mount_options\":[\"rw\"],\"mounts\":[\"/\"]}}"
I know this seems a bit messy; any help is welcome
Looking at the JSON provided in the question, it seems that your query does not use the right value for the key you specify, namely hostname: you specified hostname:HostName1 where it looks like it needs to be hostname:MyHostName1.
Also, you can drop the node argument from the search command and specify it in the query instead, e.g. node:my.node.name.
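Building on that, here is a hedged jq sketch (untested against a live Chef server; it reuses the attributes from the question and assumes every node returns all five of them) that prints one line per host, with rpm entries collapsed to name.arch version-release and filesystems reduced to their kb_size:
knife search node 'hostname:MyHostName1 OR hostname:MyHostName2' \
  -a hostname -a cpu.cores -a memory.total -a rpm -a filesystem.by_device -F j |
jq -r '
  .rows[] | to_entries[] | .value as $n |
  ( [ $n.rpm | to_entries[] | .key as $pkg | .value[] |
      "\($pkg).\(.arch) \(.version)-\(.release)" ] | join(", ") ) as $rpms |
  ( [ $n."filesystem.by_device" | to_entries[] | select(.value.kb_size != null) |
      "\(.key): kb_size: \(.value.kb_size)" ] | join(", ") ) as $fs |
  [ $n.hostname, ($n."cpu.cores" | tostring), $n."memory.total", $rpms, $fs ] | join(" | ")
'
The field order in the final array is what controls the display sequence, so reorder it there as needed.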

Filter on nested array with jmespath (using az cli)

I would like to know if a value in a TXT record is already available in the DNS:
az network dns record-set txt show -g myresourcegroup -z 'mydomain' -n 'mytxtvalues'
This is the relevant part of the JSON result:
"txtRecords": [
{
"value": [
"abc"
]
},
{
"value": [
"def"
]
}
]
I tried many queries. These are two that I expected to work:
az network dns record-set txt show -g myresourcegroup -z 'mydomain.com' -n 'mytxtvalues' --query txtRecords[?value[?starts_with(@, 'abc')]]
az network dns record-set txt show -g myresourcegroup -z 'mydomain.com' -n 'mytxtvalues' --query txtRecords[*].value[?starts_with(@, 'abc')]]
The result is:
At line:1 char:123
+ ... 'mytxtvalues' --query txtRecords[?value[?starts_with(@, 'abc') ...
+ ~ Unrecognized token in source text. At line:1 char:124
+ ... 'mytxtvalues' --query txtRecords[?value[?starts_with(@, 'abc')] ...
+ ~ Missing argument in parameter list.
It looks like the @ used to filter the array is not recognized. But I don't know how to query otherwise.
What would be a correct query to know if value abc is already in the list?
You need to use the command like this:
az network dns record-set txt show -g myresourcegroup -z 'mydomain.com' -n 'mytxtvalues' --query "txtRecords[*].value[?starts_with(@, 'abc')]"
And if you just need to output the string, you can append the parameter -o tsv. Hope this will help you.
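If the goal is just a yes/no check from a script, here is a hedged sketch (untested; resource group, zone and record-set names follow the question) that tests for an exact match of the value:
existing=$(az network dns record-set txt show -g myresourcegroup -z 'mydomain.com' -n 'mytxtvalues' \
  --query "txtRecords[].value[] | [?@ == 'abc']" -o tsv)
if [ -n "$existing" ]; then
  echo "TXT value 'abc' already exists"
else
  echo "TXT value 'abc' not found"
fi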

Recognising timestamps in Kibana and ElasticSearch

I'm new to ElasticSearch and Kibana and am having trouble getting Kibana to recognise my timestamps.
I have a JSON file with lots of data that I wish to insert into Elasticsearch using Curl. Here is an example of one of the JSON entries.
{"index":{"_id":"63"}}
{"account_number":63,"firstname":"Hughes","lastname":"Owens", "email":"hughesowens#valpreal.com", "_timestamp":"2013-07-05T08:49:30.123"}
I have tried to create an index in Elasticsearch using the command:
curl -XPUT 'http://localhost:9200/test/'
I have then tried to set up an appropriate mapping for the timestamp:
curl -XPUT 'http://localhost:9200/test/container/_mapping' -d'
{
"container" : {
"_timestamp" : {
"_timestamp" : {"enabled: true, "type":"date", "format": "date_hour_minute_second_fraction", "store":true}
}
}
}'
// format of timestamp from http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-date-format.html
I then have tried to bulk insert my data:
curl -XPOST 'localhost:9200/test/container/_bulk?pretty' --data-binary @myfile.json
All of these commands run without fault; however, when the data is viewed in Kibana, the _timestamp field is not being recognised. Sorting by the timestamp does not work, and trying to filter the data by different time periods does not work either. Any idea why this problem is occurring would be appreciated.
Managed to solve the problem. So for anyone else having this problem:
The format we had our date saved in was incorrect; it needed to be:
"_timestamp":"2013-07-05 08:49:30.123"
then our mapping needed to be:
curl -XPUT 'http://localhost:9200/test/container/_mapping' -d'
{
"container" : {
"_timestamp" : {"enabled": true, "type":"date", "format": "yyyy-MM-dd HH:mm:ss.SSS", "store":true, "path" : "_timestamp"}
}
}'
Hope this helps someone.
There is no need to make an ISO 8601 date in case you have an epoch timestamp. To make Kibana recognize the field as a date, it has to be mapped as a date field though.
Please note that you have to set the field to the date type BEFORE you input any data into the /index/type. Otherwise it will be stored as a long and cannot be changed.
Simple example that can be pasted into the marvel/sense plugin:
# Make sure the index isn't there
DELETE /logger
# Create the index
PUT /logger
# Add the mapping of properties to the document type `mem`
PUT /logger/_mapping/mem
{
"mem": {
"properties": {
"timestamp": {
"type": "date"
},
"free": {
"type": "long"
}
}
}
}
# Inspect the newly created mapping
GET /logger/_mapping/mem
Run each of these commands in sequence.
Generate free mem logs
Here is a simple script that echoes to your terminal and logs to your local Elasticsearch:
while (( 1==1 )); do memfree=`free -b|tail -n 1|tr -s ' ' ' '|cut -d ' ' -f4`; echo $memfree; curl -XPOST "localhost:9200/logger/mem" -d "{ \"timestamp\": `date +%s%3N`, \"free\": $memfree }"; sleep 1; done
Inspect data in elastic search
Paste this in your marvel/sense
GET /logger/mem/_search
Now you can move to Kibana and do some graphs. Kibana will autodetect your date field.
This solution works for older ES versions (<2.4).
For newer versions of ES, you can use the "date" field type along with the parameters described here:
https://www.elastic.co/guide/en/elasticsearch/reference/current/date.html
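For illustration, a hedged sketch of an equivalent mapping on a recent Elasticsearch (7.x or later, where mapping types and the _timestamp field are gone); the index and field names are only examples:
curl -XPUT 'http://localhost:9200/test' -H 'Content-Type: application/json' -d'
{
  "mappings": {
    "properties": {
      "timestamp": { "type": "date", "format": "yyyy-MM-dd HH:mm:ss.SSS" }
    }
  }
}'
Documents indexed afterwards with a matching timestamp value can then be picked as the time field when creating the index pattern in Kibana.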

How to inject environment variables in Varnish configuration

I have 2 environment variables:
echo $FRONT1_PORT_8080_TCP_ADDR # 172.17.1.80
echo $FRONT2_PORT_8081_TCP_ADDR # 172.17.1.77
I want to inject them into my default.vcl like:
backend front1 {
.host = $FRONT1_PORT_8080_TCP_ADDR;
}
But I get a syntax error on the $ character.
I've also tried with user variables, but I can't define them outside vcl_recv.
How can I retrieve my 2 values in the VCL?
I've managed to template my VCL like this:
backend front1 {
.host = ${FRONT1_PORT_8080_TCP_ADDR};
}
With a script:
envs=`printenv`
for env in $envs
do
IFS='=' read name value <<< "$env"
sed -i "s|\${${name}}|${value}|g" /etc/varnish/default.vcl
done
Now you can use the VMOD Varnish Standard Module (std) to get environment variables in the VCL, for example:
set req.backend_hint = app.backend(std.getenv("VARNISH_BACKEND_HOSTNAME"));
See documentation: https://varnish-cache.org/docs/trunk/reference/vmod_std.html#std-getenv
Note: it doesn't work for backend configuration, but could work elsewhere. Apparently backends are expecting constant strings and if you try, you'll get Expected CSTR got 'std.fileread'.
You can use the fileread function of the std module, and create a file for each of your environment variables.
Before running varnishd, you can run:
mkdir -p /env; \
env | while read envline; do \
k=${envline%%=*}; \
v=${envline#*=}; \
echo -n "$v" >"/env/$k"; \
done
And then, within your varnish configuration:
import std;
...
backend front1 {
.host = std.fileread("/env/FRONT1_PORT_8080_TCP_ADDR");
.port = std.fileread("/env/FRONT1_PORT_8080_TCP_PORT");
}
I haven't tested it yet. Also, I don't know if giving a string to the port configuration of the backend would work; if it doesn't, converting to an integer should work:
.port = std.integer(std.fileread("/env/FRONT1_PORT_8080_TCP_PORT"), 0);
You can use echo to eval strings.
Usually you can do something like:
VAR=test # Define variables
echo "my $VAR string" # Eval string
But if you have the text in a file, you can use eval to get the same behaviour:
VAR=test # Define variables
eval echo $(cat file.vcl) # Eval string from the given file
Sounds like a job for envsubst.
Just use the standard env var syntax in your config ($MY_VAR) and then:
envsubst < myconfig.tmpl > myconfig.vcl
You can install it with apt-get install gettext on Ubuntu.
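For example, a minimal (untested) entrypoint-style sketch that renders the template before starting varnishd; the variable names come from the question, while the template path and listen address are placeholders:
#!/bin/sh
set -e
# Only substitute the listed variables so any other $... strings in the VCL are left alone
envsubst '${FRONT1_PORT_8080_TCP_ADDR} ${FRONT2_PORT_8081_TCP_ADDR}' \
  < /etc/varnish/default.vcl.tmpl > /etc/varnish/default.vcl
exec varnishd -F -f /etc/varnish/default.vcl -a :80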

Is there any way to import a JSON file (containing 100 documents) into an Elasticsearch server?

Is there any way to import a JSON file (containing 100 documents) into an Elasticsearch server? I want to import a big JSON file into the ES server.
As dadoonet already mentioned, the bulk API is probably the way to go. To transform your file for the bulk protocol, you can use jq.
Assuming the file contains just the documents itself:
$ echo '{"foo":"bar"}{"baz":"qux"}' |
jq -c '
{ index: { _index: "myindex", _type: "mytype" } },
. '
{"index":{"_index":"myindex","_type":"mytype"}}
{"foo":"bar"}
{"index":{"_index":"myindex","_type":"mytype"}}
{"baz":"qux"}
And if the file contains the documents in a top level list they have to be unwrapped first:
$ echo '[{"foo":"bar"},{"baz":"qux"}]' |
jq -c '
.[] |
{ index: { _index: "myindex", _type: "mytype" } },
. '
{"index":{"_index":"myindex","_type":"mytype"}}
{"foo":"bar"}
{"index":{"_index":"myindex","_type":"mytype"}}
{"baz":"qux"}
jq's -c flag makes sure that each document is on a line by itself.
If you want to pipe straight to curl, you'll want to use --data-binary @-, and not just -d, otherwise curl will strip the newlines again.
You should use the Bulk API. Note that you will need to add a header (action) line before each JSON document.
$ cat requests
{ "index" : { "_index" : "test", "_type" : "type1", "_id" : "1" } }
{ "field1" : "value1" }
$ curl -s -XPOST localhost:9200/_bulk --data-binary @requests; echo
{"took":7,"items":[{"create":{"_index":"test","_type":"type1","_id":"1","_version":1,"ok":true}}]}
I'm sure someone wants this so I'll make it easy to find.
FYI - This is using Node.js (essentially as a batch script) on the same server as the brand new ES instance. Ran it on 2 files with 4000 items each and it only took about 12 seconds on my shared virtual server. YMMV
var elasticsearch = require('elasticsearch'),
fs = require('fs'),
pubs = JSON.parse(fs.readFileSync(__dirname + '/pubs.json')), // name of my first file to parse
forms = JSON.parse(fs.readFileSync(__dirname + '/forms.json')); // and the second set
var client = new elasticsearch.Client({ // default is fine for me, change as you see fit
host: 'localhost:9200',
log: 'trace'
});
for (var i = 0; i < pubs.length; i++ ) {
client.create({
index: "epubs", // name your index
type: "pub", // describe the data thats getting created
id: i, // increment ID every iteration - I already sorted mine but not a requirement
body: pubs[i] // *** THIS ASSUMES YOUR DATA FILE IS FORMATTED LIKE SO: [{prop: val, prop2: val2}, {prop:...}, {prop:...}] - I converted mine from a CSV so pubs[i] is the current object {prop:..., prop2:...}
}, function(error, response) {
if (error) {
console.error(error);
return;
}
else {
console.log(response); // I don't recommend this but I like having my console flooded with stuff. It looks cool. Like I'm compiling a kernel really fast.
}
});
}
for (var a = 0; a < forms.length; a++ ) { // Same stuff here, just slight changes in type and variables
client.create({
index: "epubs",
type: "form",
id: a,
body: forms[a]
}, function(error, response) {
if (error) {
console.error(error);
return;
}
else {
console.log(response);
}
});
}
Hope I can help more than just myself with this. Not rocket science but may save someone 10 minutes.
Cheers
jq is a lightweight and flexible command-line JSON processor.
Usage:
cat file.json | jq -c '.[] | {"index": {"_index": "bookmarks", "_type": "bookmark", "_id": .id}}, .' | curl -XPOST localhost:9200/_bulk --data-binary @-
We’re taking the file file.json and piping its contents to jq first with the -c flag to construct compact output. Here’s the nugget: We’re taking advantage of the fact that jq can construct not only one but multiple objects per line of input. For each line, we’re creating the control JSON Elasticsearch needs (with the ID from our original object) and creating a second line that is just our original JSON object (.).
At this point we have our JSON formatted the way Elasticsearch’s bulk API expects it, so we just pipe it to curl which POSTs it to Elasticsearch!
Credit goes to Kevin Marsh
Import no, but you can index the documents by using the ES API.
You can use the index API to load each line (using some kind of code to read the file and make the curl calls) or the bulk API to load them all, assuming your data file can be formatted to work with it.
Read more here: ES API
A simple shell script would do the trick if you're comfortable with shell; something like this, maybe (not tested):
while read line
do
curl -XPOST 'http://localhost:9200/<indexname>/<typeofdoc>/' -d "$line"
done <myfile.json
Personally, I would probably use Python, either pyes or the elasticsearch Python client.
pyes on github
elastic search python client
Stream2es is also very useful for quickly loading data into ES and may have a way to simply stream a file in. (I have not tested it with a file but have used it to load Wikipedia docs for ES perf testing.)
Stream2es is the easiest way IMO.
e.g. assuming a file "some.json" containing a list of JSON documents, one per line:
curl -O download.elasticsearch.org/stream2es/stream2es; chmod +x stream2es
cat some.json | ./stream2es stdin --target "http://localhost:9200/my_index/my_type"
You can use esbulk, a fast and simple bulk indexer:
$ esbulk -index myindex file.ldj
Here's an asciicast showing it loading Project Gutenberg data into Elasticsearch in about 11s.
Disclaimer: I'm the author.
You can use the Elasticsearch Gatherer Plugin.
The gatherer plugin for Elasticsearch is a framework for scalable data fetching and indexing. Content adapters are implemented in gatherer zip archives which are a special kind of plugins distributable over Elasticsearch nodes. They can receive job requests and execute them in local queues. Job states are maintained in a special index.
This plugin is under development.
Milestone 1 - deploy gatherer zips to nodes
Milestone 2 - job specification and execution
Milestone 3 - porting JDBC river to JDBC gatherer
Milestone 4 - gatherer job distribution by load/queue length/node name, cron jobs
Milestone 5 - more gatherers, more content adapters
Reference: https://github.com/jprante/elasticsearch-gatherer
One way is to create a bash script that does a bulk insert:
curl -XPOST http://127.0.0.1:9200/myindexname/type/_bulk?pretty=true --data-binary @myjsonfile.json
After you run the insert, run this command to get the count:
curl http://127.0.0.1:9200/myindexname/type/_count
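As a quick sanity check, a hedged sketch (it assumes one action line containing "_index" per document in myjsonfile.json and that jq is installed) comparing the source document count with what the index reports:
expected=$(grep -c '"_index"' myjsonfile.json)
indexed=$(curl -s http://127.0.0.1:9200/myindexname/type/_count | jq '.count')
echo "expected=$expected indexed=$indexed"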