I'm looking for a way to speed up MySQL, but even after adding indexes I could not speed up MySQL count(*) queries, so I decided to use Elasticsearch for better performance. I have around 3 million records in MySQL and I want to import all of them, joins included. I used a PHP Elasticsearch plugin to import the data, but it takes a very long time. Then I used Logstash and created a script to read the data, but that did not work either: I ran my system for a whole night and Logstash inserted only 600,000 records. So what is the solution? Do I need to improve MySQL performance to import into Elasticsearch, or is there another way to import a large amount of data into Elasticsearch?
Please check my script as well.
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://172.17.0.3:3306/repairs_db"
    # The user we wish to execute our statement as
    jdbc_user => "root"
    jdbc_password => ""
    jdbc_page_size => 50000
    jdbc_paging_enabled => true
    # The path to our downloaded JDBC driver
    jdbc_driver_library => "/home/mysql-connector-java-5.1.46/mysql-connector-java-5.1.46.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    # Our query
    statement => "SELECT r.id,r.brand_id,r.product_brand,r.product_description,r.store_id,r.product_group,r.product_id,r.itm_product_group_desc,r.first_name,r.last_name,r.status,r.damaged,r.is_extended_warranty,r.is_floor_stock,r.is_inhome,r.callcentre,r.is_bsp_case,r.created,r.updated,r.is_sandbox_mode,pro.itm_descriptor,st.name as store_name,rp.name as repairer_name from requests r JOIN products pro ON r.product_id = pro.id JOIN stores st ON r.store_id = st.id JOIN repairers rp ON r.repairer_id = rp.id"
  }
}
output {
  stdout { codec => json_lines }
  elasticsearch {
    hosts => "172.17.0.3:9200"
    index => "req-migrate"
    document_type => "data"
  }
}
Please suggest something for loading the data into Elasticsearch. Can we also use MySQL for such a situation?
Related
I need a brief explanation of how I can convert MySQL data to Elasticsearch using Logstash.
Can anyone explain the step-by-step process for this?
This is a broad question, and I don't know how familiar you are with MySQL and ES. Let's say you have a table user: you could simply dump it as CSV and load it into your ES, and that would be fine. But if you have dynamic data, where MySQL behaves more like a pipeline, you need to write a script to handle that. In any case, you can check the links below to build up your basic knowledge before asking how:
How to dump MySQL?
How to load data into ES
Also, you will probably want to know how to convert your CSV to a JSON file, which is the format ES understands best:
How to convert CSV to JSON
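For the static-table case described above, a minimal Logstash sketch that reads a CSV dump and indexes it could look like the following (the file path, column names, and index name are placeholders, not taken from the original answer):
input {
  file {
    # a CSV dump of the user table, e.g. produced with SELECT ... INTO OUTFILE
    path => "/tmp/user.csv"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  csv {
    separator => ","
    columns => ["id", "name", "email"]
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "user"
    # use the primary key as the document id so re-imports overwrite instead of duplicating
    document_id => "%{id}"
  }
}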
You can do it using the jdbc input plugin for Logstash.
Here is a config example.
Let me give you a high-level set of instructions.
Install Logstash and Elasticsearch.
Copy the JDBC driver jar (ojdbc7.jar) into the Logstash bin folder.
For Logstash, create a config file, e.g. config.yml:
input {
  # Get the data from the database; configure fields to fetch data incrementally
  jdbc {
    jdbc_driver_library => "./ojdbc7.jar"
    jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
    jdbc_connection_string => "jdbc:oracle:thin:@db:1521:instance"
    jdbc_user => "user"
    jdbc_password => "pwd"
    id => "some_id"
    jdbc_validate_connection => true
    jdbc_validation_timeout => 1800
    connection_retry_attempts => 10
    connection_retry_attempts_wait_time => 10
    # fetch the db logs using logid
    statement => "select * from customer.table where logid > :sql_last_value order by logid asc"
    # limit how many results are pre-fetched at a time from the cursor into the client's cache before retrieving more results from the result set
    jdbc_fetch_size => 500
    jdbc_default_timezone => "America/New_York"
    use_column_value => true
    tracking_column => "logid"
    tracking_column_type => "numeric"
    record_last_run => true
    schedule => "*/2 * * * *"
    type => "log.customer.table"
    add_field => { "source" => "customer.table" }
    add_field => { "tags" => "customer.table" }
    add_field => { "logLevel" => "ERROR" }
    last_run_metadata_path => "last_run_metadata_path_table.txt"
  }
}
# Massage the data before storing it in the index
filter {
  if [type] == "log.customer.table" {
    # assign values from db columns to custom fields of the index
    ruby {
      code => "event.set( 'errorid', event.get('ssoerrorid') );
               event.set( 'msg', event.get('errormessage') );
               event.set( 'logTimeStamp', event.get('date_created'));
               event.set( '@timestamp', event.get('date_created'));
              "
    }
    # remove the db columns that were mapped to custom fields of the index
    mutate {
      remove_field => ["ssoerrorid", "errormessage", "date_created"]
    }
  } # end of [type] == "log.customer.table"
} # end of filter
# Insert into the index
output {
  if [type] == "log.customer.table" {
    amazon_es {
      hosts => ["vpc-xxx-es-yyyyyyyyyyyy.us-east-1.es.amazonaws.com"]
      region => "us-east-1"
      aws_access_key_id => '<access key>'
      aws_secret_access_key => '<secret password>'
      index => "production-logs-table-%{+YYYY.MM.dd}"
    }
  }
}
Go to the bin folder and run:
logstash -f config.yml
I am working on the Elastic Stack with MySQL. Everything is working fine: Logstash takes data from the MySQL database and sends it to Elasticsearch, and to update Elasticsearch automatically when new entries arrive in MySQL I am using the schedule parameter. But with this setup Logstash keeps polling for new data continuously from its terminal, and that is my main concern.
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://localhost:3306/testdb"
    # The user we wish to execute our statement as
    jdbc_user => "root"
    jdbc_password => ""
    # The path to our downloaded JDBC driver
    jdbc_driver_library => "/home/Downloads/mysql-connector-java-5.1.38.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    # run the query every 15 minutes
    schedule => "*/15 * * * *"
    use_column_value => true
    tracking_column => "EVENT_TIME_OCCURRENCE_FIELD"
    # our query
    statement => "SELECT * FROM brainplay WHERE EVENT_TIME_OCCURRENCE_FIELD > :sql_last_value"
  }
}
output {
  stdout { codec => json_lines }
  elasticsearch {
    hosts => "localhost:9200"
    index => "test-migrate"
    document_type => "data"
    document_id => "%{personid}"
  }
}
But if the data set is large, Logstash will scan the entire data set for new entries without any stopping point, which reduces scalability and consumes more resources.
Is there any other method, or any webhook-like mechanism, so that when new data is entered into the database MySQL notifies Logstash about only the new data, or so that Logstash checks only the new entries? Please help.
You can use the sql_last_start parameter in your query with any timestamp field (assuming there is a timestamp field such as last_updated).
For example, your query could look like:
WHERE last_updated >= :sql_last_start
From this answer:
For example, the first time you run this, sql_last_start will be 1970-01-01 00:00:00 and you'll get all rows. The second run sql_last_start will be (for example) 2015-12-03 10:55:00 and the query will return all rows with a timestamp newer than that.
Or you can read this answer on using :sql_last_value:
WHERE last_updated > :sql_last_value
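Putting this together with the config from the question, a sketch of a jdbc input that only pulls rows newer than the last scheduled run might look like this (the table, column, and connection details come from the question; the tracking and metadata settings are illustrative assumptions):
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://localhost:3306/testdb"
    jdbc_user => "root"
    jdbc_password => ""
    jdbc_driver_library => "/home/Downloads/mysql-connector-java-5.1.38.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    schedule => "*/15 * * * *"
    # persist the highest value seen so far and reuse it as :sql_last_value on the next run
    use_column_value => true
    # the jdbc input lowercases column names by default, so the tracking column is given in lower case
    tracking_column => "event_time_occurrence_field"
    tracking_column_type => "timestamp"
    record_last_run => true
    last_run_metadata_path => "/home/logstash_jdbc_last_run_brainplay"
    statement => "SELECT * FROM brainplay WHERE EVENT_TIME_OCCURRENCE_FIELD > :sql_last_value ORDER BY EVENT_TIME_OCCURRENCE_FIELD ASC"
  }
}
With record_last_run enabled, each scheduled run starts from the stored value instead of rescanning the whole table, so only new rows are read.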
I have been working on this all evening and it is driving me crazy. It is supposed to be very simple, but it is not working. This works with Oracle but not with MySQL, and I created a similar db.config that is fed to Logstash using the -f option.
input {
  jdbc {
    jdbc_driver_library => "/opt/elk/logstash-5.6.0/lib/mysql-connector-java-5.1.45-bin.jar"
    jdbc_driver_class => "Java::com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://serverName:3306/dbName?verifyServerCertificate=false&useSSL=true"
    jdbc_user => "userName"
    jdbc_password => "PasswordValue"
    statement => "select user_id, visitor_returning, config_os, visitor_days_since_last from visiting_table where user_id is not null"
    # optional extras I use
    type => "visit"
    tags => ["awesome", "import"]
  }
}
output {
  stdout { codec => json_lines }
  if [type] == "visit" {
    elasticsearch {
      hosts => "127.0.0.1"
      index => "visitDb"
      document_type => "visit_results"
    }
  }
  stdout {}
}
Once I run Logstash, it does not load the data into the Elasticsearch index. I cannot even see an index named visitDb when I run the command below.
curl 'localhost:9200/_cat/indices?v'
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
yellow open .kibana Fg6P7XuHSTaonbKEbLcz5A 1 1 21 3 56.5kb 56.5kb
yellow open orderstotdb obxZ38prTFCG0W-BFTIhgw 5 1 60 0 245.4kb 245.4kb
y
I am unable to figure out what is going on with MySQL. I can see the console log retrieving the data in JSON format and writing it to the console, but the index does not show up in Elasticsearch, nor does it appear in Kibana when creating an index pattern.
Can someone please help?
Answering my own question in case anyone else is having the same issue: Elasticsearch rejects index names containing uppercase letters. Since I had named the index visitDb with a capital D, index creation was being rejected, go figure :) Hours lost debugging different options.
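In other words, only the index name needs to change. A sketch of the corrected output section, with everything else kept as in the question and the index name lowercased:
output {
  stdout { codec => json_lines }
  if [type] == "visit" {
    elasticsearch {
      hosts => "127.0.0.1"
      # index names must be all lowercase
      index => "visitdb"
      document_type => "visit_results"
    }
  }
}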
I'm doing "elastic search getting started" tutorial. Unfortunatelly this tutorial doesn't cover first step which is importing csv database into elasticsearch.
I googled to find solution but it doesn't work unfortunatelly. Here is what I want to achieve and what I have:
I have a file with the data I want to import (simplified):
id,title
10,Homer's Night Out
12,Krusty Gets Busted
I would like to import it using Logstash. After researching on the internet I ended up with the following config:
input {
  file {
    path => ["simpsons_episodes.csv"]
    start_position => "beginning"
  }
}
filter {
  csv {
    columns => [
      "id",
      "title"
    ]
  }
}
output {
  stdout { codec => rubydebug }
  elasticsearch {
    action => "index"
    hosts => ["127.0.0.1:9200"]
    index => "simpsons"
    document_type => "episode"
    workers => 1
  }
}
I am having trouble specifying the document type so that, once the data is imported and I navigate to http://localhost:9200/simpsons/episode/10, I see the result for episode 10.
Good job, you're almost there, you're only missing the document ID. You need to modify your elasticsearch output like this:
elasticsearch {
  action => "index"
  hosts => ["127.0.0.1:9200"]
  index => "simpsons"
  document_type => "episode"
  document_id => "%{id}"      <---- add this line
  workers => 1
}
After this you'll be able to query the episode with id 10:
GET http://localhost:9200/simpsons/episode/10
I'm the author of moshe/elasticsearch_loader
I wrote ESL for this exact problem.
You can download it with pip:
pip install elasticsearch-loader
And then you will be able to load csv files into elasticsearch by issuing:
elasticsearch_loader --index incidents --type incident csv file1.csv
Additionally, you can use a custom id field by adding --id-field=document_id to the command line.
I am using Logstash to index data from different MySQL database tables.
input {
  jdbc {
    jdbc_driver_library => "/opt/logstash/mysql-connector-java-5.1.39/mysql-connector-java-5.1.39-bin.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://<ip number>:3306/database"
    jdbc_user => "elastic"
    jdbc_password => "password"
    schedule => "* * * * *"
    statement => "SELECT name, id, description from user_table"
  }
}
output {
  elasticsearch {
    index => "search"
    document_type => "name"
    document_id => "%{id}"
    hosts => "127.0.0.1:9200"
  }
  #stdout { codec => json_lines }
}
The data is indexed properly, but how do we keep the data in Elasticsearch in sync with the data in the database tables as the application continuously updates it? I gave the example of just one table, but I have multiple tables whose data I want to index. I searched for an answer but could not find the details.
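A common pattern for this (a sketch only: the updated_at column and the metadata path are assumptions, not part of the original question) is to keep the scheduled jdbc input, track an update timestamp per table, and keep using the primary key as the document id so changed rows overwrite their existing documents:
input {
  jdbc {
    jdbc_driver_library => "/opt/logstash/mysql-connector-java-5.1.39/mysql-connector-java-5.1.39-bin.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://<ip number>:3306/database"
    jdbc_user => "elastic"
    jdbc_password => "password"
    schedule => "* * * * *"
    # only fetch rows changed since the last run; updated_at is an assumed, auto-updated column
    statement => "SELECT name, id, description, updated_at FROM user_table WHERE updated_at > :sql_last_value"
    use_column_value => true
    tracking_column => "updated_at"
    tracking_column_type => "timestamp"
    last_run_metadata_path => "/opt/logstash/.user_table_last_run"
  }
}
output {
  elasticsearch {
    index => "search"
    document_type => "name"
    # reusing the primary key means updated rows overwrite their existing documents
    document_id => "%{id}"
    hosts => "127.0.0.1:9200"
  }
}
For multiple tables, repeat the same pattern with a separate jdbc block per table. Note that deleted rows are not picked up this way; handling deletes usually requires soft-delete flags or a separate cleanup job.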