Vagrant - JSON Attribute Syntax/Use

I am currently using Vagrant to install a GlassFish server via a chef_solo cookbook. Everything installs correctly and I can access the server, but I have to enable secure_admin to reach the server remotely from my host machine.
The problem is that I cannot find, or don't understand, the JSON syntax Vagrant needs to set the secure_admin attribute to enabled.
I am using this cookbook: https://github.com/realityforge/chef-glassfish
In the instructions it explains that, to modify such attributes, you enter code such as this:
# Create a basic domain that logs to a central graylog server
glassfish_domain "my_domain" do
  port 80
  admin_port 8103
  extra_libraries ['https://github.com/downloads/realityforge/gelf4j/gelf4j-0.9-all.jar']
  logging_properties({
    "handlers" => "java.util.logging.ConsoleHandler, gelf4j.logging.GelfHandler",
    ".level" => "INFO",
    "java.util.logging.ConsoleHandler.level" => "INFO",
    "gelf4j.logging.GelfHandler.level" => "ALL",
    "gelf4j.logging.GelfHandler.host" => 'graylog.example.org',
    "gelf4j.logging.GelfHandler.defaultFields" => '{"environment": "' + node.chef_environment + '", "facility": "MyDomain"}'
  })
end
However, if I wish to modify features such as the port or domain name, I have to set those attributes with this syntax (what's in my Vagrantfile already):
chef.json = {
  "glassfish" => {
    "base_dir" => "/usr/local/glassfish",
    "domains_dir" => "/usr/local/glassfish/glassfish/domains",
    "domains" => {
      "domain1" => {
        "config" => {
          "domain_name" => "domain1",
          "admin_port" => 4848,
          "username" => "root",
          "password" => "admin",
        }
      }
    }
  }
}
This code makes sense to me, as I can see within the recipe "attribute_driven_domain" in this cookbook that the opening statements are structured that way. Meaning, to set the minimum memory of the domain, I would have to type:
"glassfish" => {
"domains" => {
"domain1" => {
"config" => {
"min_memory" => 512
}
}
}
}
This corresponds to the attribute path:
node['glassfish']['domains']['domain1']['config']['min_memory']
...found in this section of the recipe:
gf_sort(node['glassfish']['domains']).each_pair do |domain_key, definition|
  domain_key = domain_key.to_s
  Chef::Log.info "Defining GlassFish Domain #{domain_key}"
  admin_port = definition['config']['admin_port']
  username = definition['config']['username']
  secure = definition['config']['secure']
  password_file = username ? "#{node['glassfish']['domains_dir']}/#{domain_key}_admin_passwd" : nil
  system_username = definition['config']['system_user']
  system_group = definition['config']['system_group']
  if (definition['config']['port'] && definition['config']['port'] < 1024) || (admin_port && admin_port < 1024)
    include_recipe 'authbind'
  end
  glassfish_domain domain_key do
    min_memory definition['config']['min_memory'] if definition['config']['min_memory']
    max_memory definition['config']['max_memory'] if definition['config']['max_memory']
    max_perm_size definition['config']['max_perm_size'] if definition['config']['max_perm_size']
    max_stack_size definition['config']['max_stack_size'] if definition['config']['max_stack_size']
    port definition['config']['port'] if definition['config']['port']
However, for the part that defines the secure admin, I can't see anything that indicates where it is supposed to be placed in the chef.json block. It is found in this section:
glassfish_secure_admin "#{domain_key}: secure_admin" do
  domain_name domain_key
  admin_port admin_port if admin_port
  username username if username
  password_file password_file if password_file
  secure secure if secure
  system_user system_username if system_username
  system_group system_group if system_group
  action ('true' == definition['config']['remote_access'].to_s) ? :enable : :disable
end
I can't figure out where the secure_admin attribute is supposed to be placed in the chef.json block within my Vagrantfile. I've tried placing it in different spots: under the glassfish level, under the domains level, and under config.
I really don't know what exactly I am supposed to put, or where.
I have been using variants of this:
"secure_admin" => {
"domain_name" => "domain1"
"action" => :enable
}
or like this, if it was under domain1 but above config:
"secure_admin" => {
"action" => :enable
}
Most of the time it doesn't give any feedback of a change, or any error; sometimes, if it's put in certain spots, it fails because the recipe tries to read it as a separate domain, but other than that, not much.
Is the syntax I'm currently using to modify attributes incorrect? I'm pretty fresh with this stuff, so I don't really know. Sorry for the terribly long post.

It looks like, to enable remote access, you would set the node attribute domain['config']['remote_access'] to true. This is just a guess based on the ternary operator in the recipe. So in your original example:
"glassfish" => {
"base_dir" => "/usr/local/glassfish",
"domains_dir" => "/usr/local/glassfish/glassfish/domains",
"domains" => {
"domain1" => {
"config" => {
"domain_name" => "domain1",
"admin_port" => 4848,
"username" => "root",
"password" => "admin",
"remote_access" => true
}
}
}
}
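For completeness, here is a minimal Vagrantfile sketch with that attribute in place. This is only a sketch: the box name, cookbook path, and recipe entry point are assumptions (the question only names the chef_solo provisioner and the attribute_driven_domain recipe).
# A minimal sketch, assuming the chef-glassfish cookbook sits in ./cookbooks;
# the box name and paths below are hypothetical.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"  # hypothetical box

  config.vm.provision "chef_solo" do |chef|
    chef.cookbooks_path = "cookbooks"
    chef.add_recipe "glassfish::attribute_driven_domain"
    chef.json = {
      "glassfish" => {
        "base_dir" => "/usr/local/glassfish",
        "domains_dir" => "/usr/local/glassfish/glassfish/domains",
        "domains" => {
          "domain1" => {
            "config" => {
              "domain_name" => "domain1",
              "admin_port" => 4848,
              "username" => "root",
              "password" => "admin",
              "remote_access" => true  # drives the :enable action on glassfish_secure_admin
            }
          }
        }
      }
    }
  end
end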

Related

Create file/folder on Shared drive via service account

I'm using a G Suite Google Drive and a service account to connect to a Shared Drive, via the PHP API client library. I previously made this work with the "regular" Google Drive, but we are moving to a Shared Drive (aka Team Drive).
I can use files->listFiles() to get the file list, using:
$options = array(
    'pageSize' => 100,
    'corpora' => 'drive',
    'supportsTeamDrives' => true,
    'includeTeamDriveItems' => true,
    'teamDriveId' => $sharedID,
    'fields' => "nextPageToken, files(id, name)",
    'q' => "'" . $pParentFolderID . "' in parents "
        . " and name = \"" . $pFolderName . "\" "
        . " and mimeType = 'application/vnd.google-apps.folder' "
        . " and not trashed"
);
try {
    $files_list = $this->drive_service->files->listFiles($options);
This works and gives me the ID for the specific file/folder I ask for.
However, attempting to create a folder in the top folder on the drive results in a 404 error, saying that the file/folder is not found, even though I have just retrieved the correct ID from it.
$param = array(
    'supportsAllDrives' => true,
    'supportsTeamDrives' => true,
    'teamDriveId' => $SharedID,
    'parents' => array($pParentFolderID),
    'name' => $pFolderName,
    'mimeType' => 'application/vnd.google-apps.folder'
);
$fileMetadata = new Google_Service_Drive_DriveFile($param);
try {
    $folderObj = $this->drive_service->files->create($fileMetadata, array('fields' => 'id'));
I have tried all kinds of permutations of this, but always get the 404. I have verified many times that the file ID for the parent is correct. I have tried using teamdriveid, driveid and various other options.
This is the data that I send and the data I get back from Google:
array(6) {
  ["supportsAllDrives"]=>
  bool(true)
  ["supportsTeamDrives"]=>
  bool(true)
  ["teamDriveId"]=>
  string(19) "0ALay-iFeEOX6Uk9PVA"
  ["parents"]=>
  array(1) {
    [0]=>
    string(33) "1oI72fkTb-AObYWBEOI-ykS5eDYg3L2ar"
  }
  ["name"]=>
  string(9) "FieldTest"
  ["mimeType"]=>
  string(34) "application/vnd.google-apps.folder"
}
Trying to create:
Error from Google: array(1) {
  ["error"]=>
  array(3) {
    ["errors"]=>
    array(1) {
      [0]=>
      array(5) {
        ["domain"]=>
        string(6) "global"
        ["reason"]=>
        string(8) "notFound"
        ["message"]=>
        string(50) "File not found: 1oI72fkTb-AObYWBEOI-ykS5eDYg3L2ar."
        ["locationType"]=>
        string(9) "parameter"
        ["location"]=>
        string(6) "fileId"
      }
    }
    ["code"]=>
    int(404)
    ["message"]=>
    string(50) "File not found: 1oI72fkTb-AObYWBEOI-ykS5eDYg3L2ar."
  }
}
This process works with the "regular" drive, but fails on the Shared Drive.
Anyone have an idea what is missing?
You are providing supportsAllDrives in the request body, when you should be providing it as a query parameter, as specified in the Drive API reference for files.create. Also, supportsTeamDrives and teamDriveId are not needed (and they are deprecated). So you should change this:
$param = array(
    'supportsAllDrives' => true,
    'supportsTeamDrives' => true,
    'teamDriveId' => $SharedID,
    'parents' => array($pParentFolderID),
    'name' => $pFolderName,
    'mimeType' => 'application/vnd.google-apps.folder'
);
To this:
$param = array(
    'parents' => array($pParentFolderID),
    'name' => $pFolderName,
    'mimeType' => 'application/vnd.google-apps.folder'
);
Also, as I said, you should provide supportsAllDrives as a parameter when making the call. So you should change this:
$folderObj = $this->drive_service->files->create($fileMetadata, array('fields' => 'id'));
To this:
$folderObj = $this->drive_service->files->create($fileMetadata, array('fields' => 'id', 'supportsAllDrives' => true));
Because you were providing supportsAllDrives in the request body and not as a query parameter, you could not access shared drives. Putting both changes together gives the sketch below.
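For reference, a minimal combined sketch of both fixes (variable names follow the question; try/catch and error handling omitted):
$param = array(
    'parents' => array($pParentFolderID),
    'name' => $pFolderName,
    'mimeType' => 'application/vnd.google-apps.folder'
);
$fileMetadata = new Google_Service_Drive_DriveFile($param);
// supportsAllDrives belongs in the query parameters of the call, not in the body
$folderObj = $this->drive_service->files->create(
    $fileMetadata,
    array('fields' => 'id', 'supportsAllDrives' => true)
);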
I hope this helps.

How to migrate MySQL data to Elasticsearch using Logstash

I need a brief explanation of how I can migrate MySQL data to Elasticsearch using Logstash.
Can anyone explain the step-by-step process?
This is a broad question, and I don't know how familiar you are with MySQL and ES. Say you have a table user: you may be able to simply dump it as CSV and load that into ES. But if you have dynamic data, with MySQL acting like a pipeline, you will need to write a script to handle it. In any case, you can check the links below to build your basic knowledge before you ask how:
How to dump MySQL?
How to load data into ES
Also, you will probably want to know how to convert your CSV to JSON, which is the format ES understands best:
How to convert CSV to JSON
You can do it using the jdbc input plugin for logstash.
Here is a config example.
Let me provide you with a high-level instruction set.
Install Logstash and Elasticsearch.
Copy the JDBC driver jar (ojdbc7.jar in this example) into the Logstash bin folder.
For Logstash, create a config file, e.g. config.yml:
input {
  # Get the data from the database; configure fields to get data incrementally
  jdbc {
    jdbc_driver_library => "./ojdbc7.jar"
    jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
    jdbc_connection_string => "jdbc:oracle:thin:@db:1521:instance"
    jdbc_user => "user"
    jdbc_password => "pwd"
    id => "some_id"
    jdbc_validate_connection => true
    jdbc_validation_timeout => 1800
    connection_retry_attempts => 10
    connection_retry_attempts_wait_time => 10
    # fetch the db logs using logid
    statement => "select * from customer.table where logid > :sql_last_value order by logid asc"
    # limit how many results are pre-fetched at a time from the cursor into the client's cache
    # before retrieving more results from the result-set
    jdbc_fetch_size => 500
    jdbc_default_timezone => "America/New_York"
    use_column_value => true
    tracking_column => "logid"
    tracking_column_type => "numeric"
    record_last_run => true
    schedule => "*/2 * * * *"
    type => "log.customer.table"
    add_field => {"source" => "customer.table"}
    add_field => {"tags" => "customer.table"}
    add_field => {"logLevel" => "ERROR"}
    last_run_metadata_path => "last_run_metadata_path_table.txt"
  }
}
# Massage the data to store in the index
filter {
  if [type] == 'log.customer.table' {
    # assign values from db columns to custom fields of the index
    ruby {
      code => "event.set( 'errorid', event.get('ssoerrorid') );
               event.set( 'msg', event.get('errormessage') );
               event.set( 'logTimeStamp', event.get('date_created'));
               event.set( '@timestamp', event.get('date_created'));
              "
    }
    # remove the db columns that were mapped to custom fields of the index
    mutate {
      remove_field => ["ssoerrorid", "errormessage", "date_created"]
    }
  } # end of [type] == 'log.customer.table'
} # end of filter
# Insert into the index
output {
  if [type] == 'log.customer.table' {
    amazon_es {
      hosts => ["vpc-xxx-es-yyyyyyyyyyyy.us-east-1.es.amazonaws.com"]
      region => "us-east-1"
      aws_access_key_id => '<access key>'
      aws_secret_access_key => '<secret password>'
      index => "production-logs-table-%{+YYYY.MM.dd}"
    }
  }
}
Go to bin and run:
logstash -f config.yml
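Since the question asks about MySQL rather than Oracle, here is a minimal sketch of the same pipeline adapted for MySQL. The driver jar, connection string, credentials, table, and index name are all assumptions to adapt for your setup:
input {
  jdbc {
    # MySQL Connector/J driver; the path and version are hypothetical
    jdbc_driver_library => "./mysql-connector-java-5.1.46.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/mydb"
    jdbc_user => "user"
    jdbc_password => "pwd"
    # fetch rows incrementally, keyed on an auto-increment id column
    statement => "SELECT * FROM user WHERE id > :sql_last_value ORDER BY id ASC"
    use_column_value => true
    tracking_column => "id"
    tracking_column_type => "numeric"
    schedule => "*/2 * * * *"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "users-%{+YYYY.MM.dd}"
  }
}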

Logstash - How to trigger Celery tasks through RabbitMQ

Can someone explain to me how I can trigger Celery tasks through Logstash?
Is it possible?
If I try to do that in PHP through the 'php-amqplib' library (without using Logstash), it works fine:
$connection = new AMQPStreamConnection(
    'rabbitmq.local',
    5672,
    'guest',
    'guest'
);
$channel = $connection->channel();
$channel->queue_declare(
    'celery',
    false,
    true,
    false,
    false
);
$taskId = rand(1000, 10000);
$props = array(
    'content_type' => 'application/json',
    'content_encoding' => 'utf-8',
);
$body = array(
    'task' => 'process_next_task',
    'lang' => 'py',
    'args' => array('ktest' => 'vtest'),
    'kwargs' => array('ktest' => 'vtest'),
    'origin' => '@' . 'mytest',
    'id' => $taskId,
);
$msg = new AMQPMessage(json_encode($body), $props);
$channel->basic_publish($msg, 'celery', 'celery');
According to the Celery docs:
http://docs.celeryproject.org/en/latest/internals/protocol.html
I'm trying to send the request in JSON format; this is my Logstash filter:
ruby {
  remove_field => ['headers', '@timestamp', '@version', 'host', 'type']
  code => "
    event.set('properties',
      {
        :content_type => 'application/json',
        :content_encoding => 'utf-8'
      })
  "
}
And Celery's answer is:
[2017-05-05 14:35:09,090: WARNING/MainProcess] Received and deleted unknown message. Wrong destination?!
{content_type:None content_encoding:None delivery_info:{'exchange': 'celery', 'routing_key': 'celery', 'redelivered': False, 'consumer_tag': 'None4', 'delivery_tag': 66} headers={}}
Basically, Celery is not able to decode my message format, or rather... I'm not able to produce the request in the JSON format :)
It's driving me crazy, thank you in advance for any clues :)
I forgot: this is my output plugin in Logstash
rabbitmq {
  key => "celery"
  exchange => "celery"
  exchange_type => "direct"
  user => "${RABBITMQ_USER}"
  password => "${RABBITMQ_PASSWORD}"
  host => "${RABBITMQ_HOST}"
  port => "${RABBITMQ_PORT}"
  durable => true
  persistent => true
  codec => json
}
From the information provided in this question, you can't.
When you're playing with the event in the ruby filter, you're actually modifying what will be put in the body of the message, while what you'd like to set are the RabbitMQ headers and properties of your message.
Until that functionality has been implemented, I do not think you'll be able to achieve it, unless of course you implement it yourself. After all, the plugin is available on GitHub.
As Olivier said, right now it is not possible, but I've created a pull request against the official project:
https://github.com/logstash-plugins/logstash-output-rabbitmq/pull/59
If you are looking for a working version, take a look at my fork:
https://github.com/useless-stuff/logstash-output-rabbitmq
You should be seriously scared about that code :)
I'm very far from being a Ruby developer.
But it works :)

Grok match json field and value

I'm using koajs with bunyan to save error logs to my server, then I use filebeat to have them shipped to my logstash application.
My error logs are being forwarded correctly; however, I would now like to create a filter which will add a tag to specific logs.
{"name":"myapp","hostname":"sensu-node-dev","pid":227,"level":50,"err":{"message":"Cannot find module 'lol'","name":"Error","stack":"Error: Cannot find module 'lol'\n at Function.Module._resolveFilename (module.js:339:15)\n at Function.Module._load (module.js:290:25)\n at Module.require (module.js:367:17)\n at require (internal/module.js:16:19)\n at Object.<anonymous> (/srv/www/dev.site/app.js:27:6)\n at next (native)\n at Object.<anonymous> (/srv/www/dev.site/node_modules/koa-compose/index.js:29:5)\n at next (native)\n at onFulfilled (/srv/www/dev.site/node_modules/co/index.js:65:19)\n at /srv/www/dev.site/node_modules/co/index.js:54:5","code":"MODULE_NOT_FOUND"},"msg":"Cannot find module 'lol'","time":"2016-02-24T22:04:26.492Z","v":0}
Now the interesting parts in that specific log are "err":{...} and the "name":"Error" bits. For simplicity's sake, I would just like to create a filter which detects "name":"Error" in the log (if it exists) and then applies a tag, add_tag => ["error"], to the log.
Here is my /etc/logstash/conf.d/logstash.conf file:
input {
  beats {
    port => 5044
    type => "logs"
  }
}
filter {
  grok {
    type => "log"
    pattern => "???" # <--- have no idea what to do here
    add_tag => ["error"]
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
  http {
    http_method => "post"
    url => "<MY_URL>"
    format => "message"
    message => "{"text":"dis is workinz, you has error"}"
    tags => ["error"]
  }
}
I tried the following:
pattern => ""name":"Error""
But got the following error:
Error: Expected one of #, {, } at line 9, column 31 (byte 107) after filter {
grok {
match => { "message" => ""
You may be interested in the '--configtest' flag which you can
use to validate logstash's configuration before you choose
to restart a running system.
There is no simple example of this specific type of matching anywhere.
Bonus: how does one escape quotes in Logstash? I couldn't find anything on the subject.
If you only want to see if a string exists in your message, try this:
if [message] =~ /"name":"Error"/ {
  mutate {
    add_tag => ["error"]
  }
}
If you really want to grok the input into fields, check out the json codec or filter instead.
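For reference, a minimal sketch of that second approach, assuming the whole Bunyan document arrives in the message field: parse it with the json filter, then test the parsed field directly instead of grokking the raw string.
filter {
  # parse the Bunyan JSON out of the raw message
  json {
    source => "message"
  }
  # Bunyan puts the error name at err.name; tag the event when it is "Error"
  if [err][name] == "Error" {
    mutate {
      add_tag => ["error"]
    }
  }
}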

logstash grok: remove fqdn from hostname and ignore ip

My Logstash input receives JSON documents that look like this:
{"src":"comp1.google.com","dst":"comp2.yehoo.com","next_hope":"router4.ccc.com"}
The JSON can also look like this (some keys can hold an IP instead of a hostname):
{"src":"comp1.google.com","dst":"192.168.1.20","next_hope":"router4.ccc.com"}
I want to strip the FQDN down to the short hostname, but if a field contains an IP, I want to leave it untouched.
I tried this, but it's not working:
filter {
  grok {
    match => {
      "src" => "%{IP:src}"
      "src" => "%{WORD:src}"
    }
    overwrite => ["src"]
    break_on_match => true
  }
  grok {
    match => {
      "dst" => "%{IP:dst}"
      "dst" => "%{WORD:dst}"
    }
    overwrite => ["dst"]
    break_on_match => true
  }
  grok {
    match => {
      "next_hope" => "%{IP:next_hope}"
      "next_hope" => "%{WORD:next_hope}"
    }
    overwrite => ["next_hope"]
    break_on_match => true
  }
}
This filter works well on the first JSON, but not on the second one (the dst key). I get this result:
{
  "src" => "comp1",
  "dst" => "192",
  "next_hope" => "router4"
}
I want the dst field to keep its original value, because it holds an IP address and not a hostname. The result I expect is:
{
  "src" => "comp1",
  "dst" => "192.168.1.20",
  "next_hope" => "router4"
}
Any ideas?
Also, is it possible to do all of this in one grok filter?
Your problem is that the regex for WORD matches a number. The easiest thing to do would be to protect the groks so that they don't run for IP addresses:
if [src] !~ /\d+\.\d+\.\d+\.\d+/ {
  grok {
    match => {
      "src" => "%{WORD:src}"
    }
    overwrite => ["src"]
  }
}
And repeat that for the other fields, as in the sketch below.
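For completeness, a minimal sketch of the full filter with the same guard repeated for all three fields (field names taken from the question). Note this cannot be collapsed into a single grok, because the IP guards are conditionals that have to live outside the grok blocks:
filter {
  if [src] !~ /\d+\.\d+\.\d+\.\d+/ {
    grok {
      match => { "src" => "%{WORD:src}" }
      overwrite => ["src"]
    }
  }
  if [dst] !~ /\d+\.\d+\.\d+\.\d+/ {
    grok {
      match => { "dst" => "%{WORD:dst}" }
      overwrite => ["dst"]
    }
  }
  if [next_hope] !~ /\d+\.\d+\.\d+\.\d+/ {
    grok {
      match => { "next_hope" => "%{WORD:next_hope}" }
      overwrite => ["next_hope"]
    }
  }
}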