What should you name filename parts? - language-agnostic

If I've got a file:
/home/dean/my-file.txt
What would you name the following parts:
1) /home/dean/
2) my-file.txt
3) my-file
4) /home/dean/my-file.txt
I've encountered (and written) much too much code where any of the above might be named 'file' or 'filename' or 'filepath' or 'filenameAndPath', etc.

PHP's pathinfo function names them as follows:
[dirname] => /home/dean/
[basename] => my-file.txt
[extension] => txt
[filename] => my-file
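A quick way to see this mapping for yourself (note that pathinfo() actually reports dirname without the trailing slash):
<?php
// Split a path into its named parts with pathinfo()
$parts = pathinfo('/home/dean/my-file.txt');
print_r($parts);
// Array
// (
//     [dirname] => /home/dean
//     [basename] => my-file.txt
//     [extension] => txt
//     [filename] => my-file
// )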

LogicException: You must define a binary prior to conversion

In snappy.php I have this code:
'pdf' => [
    'enabled' => true,
    'binary' => env('"C:\Program Files\wkhtmltox\bin\wkhtmltopdf-amd64.exe"'),
    'timeout' => false,
    'options' => [],
    'env' => [],
],
and when I try this code in web.php to test HTML-to-PDF conversion:
Route::get('/', function () {
    $snappy = App::make('snappy.pdf');
    $snappy->generateFromHtml('<h1>hello</h1>', 'exemple.pdf');
    return view('welcome');
});
The problem is the executable path. If Snappy can't find the executable, it shows this message:
Please mark the file as executable (chmod +x on linux)
You can see the documentation for details.
If you are using Windows, you must specify the binary as a quoted path, like this:
'binary' => '"C:\Program Files\wkhtmltox\bin\wkhtmltopdf-amd64.exe"'
And make sure the path to the .exe is correct.
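Applied to the snappy.php from the question, the fixed block would look roughly like this (using the install path from the question; adjust it to wherever the .exe actually lives). Note that the original config wrapped the quoted path in env(), which looks up an environment variable named after that whole string and so resolves to nothing; the path belongs directly in the config (or in a real .env variable):
'pdf' => [
    'enabled' => true,
    // Quoted because the path contains a space ("Program Files")
    'binary'  => '"C:\Program Files\wkhtmltox\bin\wkhtmltopdf-amd64.exe"',
    'timeout' => false,
    'options' => [],
    'env'     => [],
],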

Keys of Dict are not encoded when I read from txt - Julia

I read a .txt file (it contains a Dict), but the keys of the Dict come back wrong. In the original file the names are right (e.g. the file has "P. Cárdenas" but I get "P. C\xe1rdenas").
>> f = open("dict.txt", "r")
>> dict_maestro = JSON.parse(f)
Dict{String,Any} with 5 entries:
  "P. C\xe1rdenas" => Dict{String,Any}("dist_tm"=>Any[Any[0.248, 0.074, 0.…
  "S. L\xf3pez"    => Dict{String,Any}("dist_tm"=>Any[Any[0.096, 0.082, 0.…
  "S. Cabrera"     => Dict{String,Any}("dist_tm"=>Any[Any[0.341, 0.094, 0.…
  "C. Mu\xf1oz"    => Dict{String,Any}("dist_tm"=>Any[Any[0.246, 0.073, 0.…
  "R. Bugue\xf1o"  => Dict{String,Any}("dist_tm"=>Any[Any[0.261, 0.068, 0.…
How can I get the right names?
If I am not mistaken, you are reading the file as bytes, not as UTF strings. According to the answer to the linked duplicate question, you should first convert the contents of the file to appropriately encoded strings and then parse them as JSON. Roughly:
s = open("dict.txt", "r") do f
    utf16(readbytes(f))
end
dict_maestro = JSON.parse(s)
You can use utf8 instead of utf16 if that is the encoding of your file.
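Note that readbytes and utf16 are from older Julia (0.x) and no longer exist in Julia 1.x. A minimal sketch of the same idea for Julia 1.x, assuming the file is UTF-8 encoded JSON:
using JSON

# read(f, String) builds a Julia String from the raw bytes; Julia strings are
# UTF-8, so "P. Cárdenas" survives intact if the file is UTF-8
dict_maestro = open("dict.txt", "r") do f
    JSON.parse(read(f, String))
end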

How to migrate MySQL data to Elasticsearch using Logstash

I need a brief explanation of how I can migrate MySQL data to Elasticsearch using Logstash.
Can anyone explain the step-by-step process?
This is a broad question, and I don't know how familiar you are with MySQL and ES. Let's say you have a table user: you may simply dump it as CSV and load it into ES, and that will be fine. But if you have dynamic data, where MySQL acts more like a pipeline, you need to write a script to handle it. In any case, you can check the links below to build up basic knowledge before asking how:
How to dump mysql?
How to load data to ES
Also, you will probably want to know how to convert your CSV to a JSON file, which is the format that suits ES best:
How to convert CSV to JSON
You can do it using the jdbc input plugin for logstash.
Here is a config example.
Let me provide you with a high-level instruction set:
1) Install Logstash and Elasticsearch.
2) Copy the JDBC driver jar into Logstash's bin folder (this example uses Oracle's ojdbc7.jar; for MySQL you would use the MySQL Connector/J jar instead).
3) For Logstash, create a config file, e.g. config.yml:
input {
  # Get the data from the database; configure fields to fetch data incrementally
  jdbc {
    jdbc_driver_library => "./ojdbc7.jar"
    jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
    jdbc_connection_string => "jdbc:oracle:thin:@db:1521:instance"
    jdbc_user => "user"
    jdbc_password => "pwd"
    id => "some_id"
    jdbc_validate_connection => true
    jdbc_validation_timeout => 1800
    connection_retry_attempts => 10
    connection_retry_attempts_wait_time => 10
    # Fetch the db logs using logid
    statement => "select * from customer.table where logid > :sql_last_value order by logid asc"
    # Limit how many results are pre-fetched at a time from the cursor into the
    # client's cache before retrieving more results from the result set
    jdbc_fetch_size => 500
    jdbc_default_timezone => "America/New_York"
    use_column_value => true
    tracking_column => "logid"
    tracking_column_type => "numeric"
    record_last_run => true
    schedule => "*/2 * * * *"
    type => "log.customer.table"
    add_field => {"source" => "customer.table"}
    add_field => {"tags" => "customer.table"}
    add_field => {"logLevel" => "ERROR"}
    last_run_metadata_path => "last_run_metadata_path_table.txt"
  }
}
# Massage the data to store in the index
filter {
  if [type] == 'log.customer.table' {
    # Assign values from db columns to custom fields of the index
    ruby {
      code => "event.set( 'errorid', event.get('ssoerrorid') );
               event.set( 'msg', event.get('errormessage') );
               event.set( 'logTimeStamp', event.get('date_created'));
               event.set( '@timestamp', event.get('date_created'));
              "
    }
    # Remove the db columns that were mapped to custom fields of the index
    mutate {
      remove_field => ["ssoerrorid", "errormessage", "date_created"]
    }
  } # end of [type] == 'log.customer.table'
} # end of filter
# Insert into the index
output {
  if [type] == 'log.customer.table' {
    amazon_es {
      hosts => ["vpc-xxx-es-yyyyyyyyyyyy.us-east-1.es.amazonaws.com"]
      region => "us-east-1"
      aws_access_key_id => '<access key>'
      aws_secret_access_key => '<secret password>'
      index => "production-logs-table-%{+YYYY.MM.dd}"
    }
  }
}
4) Go to the bin folder and run:
logstash -f config.yml
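Since the question is about MySQL while the example above uses an Oracle driver, here is a minimal sketch of the same pipeline adapted to MySQL. The jar filename, connection string, table, tracking column, and index name are placeholder assumptions; the driver class is the standard MySQL Connector/J one:
input {
  jdbc {
    # MySQL Connector/J jar, downloaded separately (exact filename depends on the version)
    jdbc_driver_library => "./mysql-connector-java-8.0.30.jar"
    jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/mydb"
    jdbc_user => "user"
    jdbc_password => "pwd"
    # Pull rows incrementally, tracking the highest id seen so far
    statement => "SELECT * FROM my_table WHERE id > :sql_last_value ORDER BY id ASC"
    use_column_value => true
    tracking_column => "id"
    tracking_column_type => "numeric"
    schedule => "*/2 * * * *"
  }
}
output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    index => "mydb-%{+YYYY.MM.dd}"
  }
}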

Import CSV into Elasticsearch

I'm doing the Elasticsearch "Getting Started" tutorial. Unfortunately, this tutorial doesn't cover the first step, which is importing a CSV database into Elasticsearch.
I googled for a solution, but unfortunately it didn't work. Here is what I want to achieve and what I have:
I have a file with the data I want to import (simplified):
id,title
10,Homer's Night Out
12,Krusty Gets Busted
I would like to import it using Logstash. After researching on the internet, I ended up with the following config:
input {
  file {
    path => ["simpsons_episodes.csv"]
    start_position => "beginning"
  }
}
filter {
  csv {
    columns => [
      "id",
      "title"
    ]
  }
}
output {
  stdout { codec => rubydebug }
  elasticsearch {
    action => "index"
    hosts => ["127.0.0.1:9200"]
    index => "simpsons"
    document_type => "episode"
    workers => 1
  }
}
I'm having trouble with specifying the document type, so that once the data is imported, I can navigate to http://localhost:9200/simpsons/episode/10 and see the result for episode 10.
Good job, you're almost there, you're only missing the document ID. You need to modify your elasticsearch output like this:
elasticsearch {
  action => "index"
  hosts => ["127.0.0.1:9200"]
  index => "simpsons"
  document_type => "episode"
  document_id => "%{id}"    # <---- add this line
  workers => 1
}
After this, you'll be able to query the episode with ID 10:
GET http://localhost:9200/simpsons/episode/10
I'm the author of moshe/elasticsearch_loader
I wrote ESL for this exact problem.
You can download it with pip:
pip install elasticsearch-loader
And then you will be able to load csv files into elasticsearch by issuing:
elasticsearch_loader --index incidents --type incident csv file1.csv
Additionally, you can use a custom ID field by adding --id-field=document_id to the command line.
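For the Simpsons CSV from the question, the invocation would presumably look like this (using the flags shown above, with the CSV's id column as the document ID):
elasticsearch_loader --index simpsons --type episode --id-field=id csv simpsons_episodes.csv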

JSON to Hash in Ruby and vice-versa using Files - Parser Error

I am trying to save data from a Hash to a file. I convert it to JSON and dump it into the file.
When I try to parse it back from the file into a Hash, I get a JSON::ParserError.
Code to convert the Hash to a JSON file (works fine):
user = {:email => "cumber@cc.cc", :passwrd => "hardPASSw0r|)"}
student_file = File.open("students.txt", "a+") do |f|
  f.write JSON.dump(user)
end
After adding a few values one by one to the file it looks something like this:
{"email":"test1#gmail.com","passwrd":"qwert123"}{"email":"test3#gmail.com","passwrd":"qwert12345"}{"email":"cumber#cc.cc","passwrd":"hardPASSw0r|)"}
I tried the following code to convert back to Hash but it doesn't work:
file = File.read('students.txt')
data_hash = JSON.parse(file)
I get
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/json/common.rb:155:in `parse': 757: unexpected token at '{"email":"test3@gmail.com","passwrd":"qwert12345"}{"email":"cumber@cc.cc","passwrd":"hardPASSw0r|)"}' (JSON::ParserError)
    from /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/json/common.rb:155:in `parse'
    from hash_json.rb:25:in `<main>'
My goal is to be able to add and remove values from the file.
How do I fix this, where was my mistake? Thank you.
This should work. The error happens because opening with "a+" appends a new JSON object on every run, so the file ends up holding several JSON documents back-to-back, and JSON.parse expects exactly one; writing with "w" replaces the file with a single valid document:
https://repl.it/EXGl/0
# as advised by @EricDuminil, on some environments you need to require 'json' too
require 'json'
user = {:email => "cumber@cc.cc", :passwrd => "hardPASSw0r|)"}
student_file = File.open("students.txt", "w") do |f|
  f.write(user.to_json)
end

file = File.read('students.txt')
puts "saved content is: #{JSON.parse(file)}"
P.S. I hope this is only an example; never store passwords in plain text! NEVER ;-)
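Since the stated goal is to add and remove values from the file, a minimal sketch of one way to do that (not from the answer above; the file name and field names follow the question, everything else is illustrative) is to keep a single JSON array and rewrite the whole file on each change:
require 'json'

FILE = 'students.txt'

# Load the current list (empty if the file doesn't exist yet)
users = File.exist?(FILE) ? JSON.parse(File.read(FILE)) : []

# Add an entry, then rewrite the file as one valid JSON document
users << { 'email' => 'new@example.com', 'passwrd' => 'example' }
File.write(FILE, JSON.generate(users))

# Remove an entry by email and rewrite again
users.reject! { |u| u['email'] == 'new@example.com' }
File.write(FILE, JSON.generate(users))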