Ruby Threads and MySQL connection - mysql

I'm trying out Ruby threads. I have a simple script that needs to iterate over a JSON document to pull out certain data. It works, but at some point the shell shows:
`_query': This connection is in use by: #<Thread:0x00007f9c73973eb8@thread.rb:62 sleep> (Mysql2::Error)
How can I close that connection that I need for the data?
And most importantly, am I actually using the threads in the right way?
This is the script; it will run from a crontab:
require 'firebase'
require 'conekta'
require 'json'
require 'savon'
require "crack"
require 'active_support/core_ext/hash' #from_xml
require 'nokogiri'
require 'xmlsimple'
require 'mysql2'

class Cron
  def generate_activation_code(size = 10)
    charset = %w{ 1 2 3 4 5 6 7 8 9 A B C D E F G H I J K L M N O P Q R S T U V W X Y Z}
    (0...size).map{ charset.to_a[rand(charset.size)] }.join
  end

  def construct()
    base_uri = 'FIREBASE_URL'
    file = File.open("FIREBASE_CREDENTIALS", "rb")
    firebase = Firebase::Client.new(base_uri, file.read)
    Conekta.locale = :es
    Conekta.api_key = 'MY_KEY'
    @response = firebase.get('users', nil)
    @client = Savon.client(wsdl: 'MY_URL', ntlm: ["user", "pass"], :convert_request_keys_to => :camelcase)
    @client_mysql = Mysql2::Client.new(:host => "localhost", :username => "root", :password => "", :database => "masaldo_api")
  end

  def get_comision()
    last_validity = @client_mysql.query("SELECT comision * 100 as comision FROM configuration")
    last_validity.each do |validityr|
      @comision = validityr["comision"]
    end
  end

  def create_transaction(sku, token, phone, userid, card)
    validity = @client_mysql.query("SELECT precio * 100 as precio_total, vigencia, descripcion, precio as precio_base FROM bluesoft_services_validity WHERE sku='#{sku}'")
    validity.each do |row|
      @vigencia = row["vigencia"]
      @descipcion = row["descripcion"]
      @precio = row["precio_total"]
      @precio_base = row["precio_base"].to_i
    end
    if @vigencia.to_i > 0
      last_current = @client_mysql.query("SELECT * FROM transactions WHERE number='#{phone}' ORDER BY trandate DESC LIMIT 1")
      last_current.each do |last|
        @trandate = last["trandate"]
        @trandate_result = @trandate.strftime("%Y%m%d %H:%M:%S")
      end
    end
    @last_with_validty = (@trandate + (@vigencia).to_i.day).strftime("%Y-%m-%d")
    @today = (Time.now).strftime("%Y-%m-%d")
    if @last_with_validty == @today
      conekta_charges = Conekta::Order.create({
        :currency => "MXN",
        :customer_info => {
          :customer_id => userid
        },
        :line_items => [{
          :name => @descipcion,
          :unit_price => @precio.to_i,
          :quantity => 1
        },
        {
          :name => 'Comision de Recarga',
          :unit_price => @comision.to_i,
          :quantity => 1
        }],
        :charges => [{
          :payment_method => {
            :type => "card",
            :payment_source_id => card
          }
        }]
      })
      if conekta_charges['payment_status'] == 'paid'
        begin
          response = @client.call(:venta, message: { 'sku' => 'TELCPA100MXN', 'fechaLocal' => '20180117 14:55:00', 'referencia' => '818181818181', 'monto' => '100', 'id_cadena' => '30', 'id_tienda' => '30', 'id_terminal' => '1', 'folio' => 'LUCOPCIHOW' })
          parameters = response.body
          parameters.each do |response, data|
            if data[:return][:respuesta][:codigo_respuesta] == 0
              puts data[:return][:respuesta]
            else
              puts data[:return][:respuesta]
            end
          end
        rescue Exception => e
          puts e.message
          puts e.backtrace.inspect
        end
      end
    end
  end

  def init()
    threads = []
    hash = @response.body
    hash.each do |token, user|
      threads << Thread.new do
        # Check if user is current for transaction; if not, need to check agenda
        if user['is_current']
          self.create_transaction(user['sku'], token, user['phoneNumber'], user['customer_id'], user['fav_card'])
          user['addressBook'].each do |userid, user_address_book|
            if user['is_current']
              replacements = { '+521' => '' }
              phone_number = user_address_book['phoneNumber'].gsub(Regexp.union(replacements.keys), replacements)
              self.create_transaction(user_address_book['sku'], token, phone_number, user['customer_id'], user_address_book['fav_card'])
            end
          end
        end
      end
    end
    threads.each { |t| t.join }
  end
end

classCron = Cron.new()
classCron.construct()
classCron.get_comision()
classCron.init()
Regards

When writing multi-threaded code in Ruby, you need to be careful not to share resources like database connections between threads unless the driver makes it abundantly clear that kind of use is supported. The ones I'm familiar with don't, and Mysql2 is not thread-safe in that way.
You can use Thread.current[:db] to store a database connection per thread, like:
def db
  Thread.current[:db] ||= Mysql2::Client.new(...)
end
Where then you can refer to it like this:
db.query(...)
That will automatically instantiate the connection as required.
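A runnable sketch of the pattern (a stand-in FakeConnection class is used here instead of a real Mysql2::Client, so the per-thread behavior is visible without a database):

```ruby
# Stand-in for Mysql2::Client -- assume the real client cannot be
# safely shared between threads.
class FakeConnection
  def query(sql)
    "ran #{sql}"
  end
end

# One connection per thread, memoized in thread-local storage.
def db
  Thread.current[:db] ||= FakeConnection.new
end

# Each thread lazily builds its own connection on first use:
conns = 2.times.map { Thread.new { db }.value }
puts conns[0].equal?(conns[1])  # => false (two threads, two connections)

# Within a single thread, repeated calls reuse the same connection:
puts db.equal?(db)              # => true
```

Swap FakeConnection.new for Mysql2::Client.new(...) and each thread then queries over its own connection.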
It's worth noting that mysql2 is a low-level driver and isn't very pleasant to use. A higher level abstraction like Sequel provides a number of significant benefits: Migrations, (optional) model layer, and a very robust query builder with support for placeholder values and easy escaping.

logstash-input-jdbc how to use utf-8 chars in statement

I use logstash-input-jdbc to sync my database to elasticsearch.
Env: (logstash 7.5, elasticsearch 7.5, mysql-connector-java-5.1.48.jar, logstash-input-jdbc-4.3.16)
materials.conf:
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://localhost:3306/sc_education"
    jdbc_driver_library => "connector/mysql-connector-java-5.1.48.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_user => "dauser"
    jdbc_password => "daname"
    jdbc_paging_enabled => "true"
    jdbc_page_size => "50"
    statement_filepath => "./materials.sql"
    schedule => "* * * * *"
    last_run_metadata_path => "./materials.info"
    record_last_run => true
    tracking_column => updated_at
    codec => plain { charset => "UTF-8"}
    # parameters => { "favorite_artist" => "Beethoven" }
    # statement => "SELECT * from songs where artist = :favorite_artist"
  }
}
filter {
  json {
    source => "message"
    remove_field => ["message"]
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "materials"
    document_id => "%{material_id}"
  }
  stdout {
    codec => json_lines
  }
}
materials.sql:
SELECT material_name,material_id,
CASE grade_id
WHEN grade_id = 1 THEN "一年级"
WHEN grade_id = 2 THEN "二年级"
WHEN grade_id = 3 THEN "三年级"
WHEN grade_id = 4 THEN "四年级"
WHEN grade_id = 5 THEN "五年级"
WHEN grade_id = 6 THEN "六年级"
WHEN grade_id = 7 THEN "初一"
WHEN grade_id = 8 THEN "初二"
WHEN grade_id = 9 THEN "初三"
WHEN grade_id = 10 THEN "高一"
WHEN grade_id = 11 THEN "高二"
WHEN grade_id = 12 THEN "高三"
ELSE "" END as grade,
CASE subject_id
WHEN subject_id = 1 THEN "数学"
WHEN subject_id = 2 THEN "物理"
WHEN subject_id = 3 THEN "化学"
WHEN subject_id = 4 THEN "语文"
WHEN subject_id = 5 THEN "英语"
WHEN subject_id = 6 THEN "科学"
WHEN subject_id = 7 THEN "音乐"
WHEN subject_id = 8 THEN "绘画"
WHEN subject_id = 9 THEN "政治"
WHEN subject_id = 10 THEN "历史"
WHEN subject_id = 11 THEN "地理"
WHEN subject_id = 12 THEN "生物"
WHEN subject_id = 13 THEN "奥数"
ELSE "" END as subject,
CASE course_term_id
WHEN course_term_id = 1 THEN "春"
WHEN course_term_id = 2 THEN "暑"
WHEN course_term_id = 3 THEN "秋"
WHEN course_term_id = 4 THEN "寒"
ELSE "" END as season,
created_at, updated_at from sc_materials where updated_at > :sql_last_value and material_id in (2025,317,2050);
./bin/logstash -f materials.conf
{"@version":"1","updated_at":"2019-08-19T02:04:54.000Z","season":"?","grade":"","created_at":"2019-08-19T02:04:54.000Z","@timestamp":"2019-12-13T01:02:01.907Z","material_name":"test material seri''al","material_id":2025,"subject":"??"}
{"@version":"1","updated_at":"2019-08-26T09:25:35.000Z","season":"","grade":"","created_at":"2019-08-26T09:25:35.000Z","@timestamp":"2019-12-13T01:02:01.908Z","material_name":"人教版高中英语必修三第10讲Unit5 Canada The True North语法篇A学生版2.pdf","material_id":2050,"subject":""}
{"@version":"1","updated_at":"2019-08-10T06:50:48.000Z","season":"?","grade":"","created_at":"2019-05-27T06:26:44.000Z","@timestamp":"2019-12-13T01:02:01.880Z","material_name":"90aca2238832143fb75dcf0fe6dbbfa9.pdf","material_id":317,"subject":""}
The Chinese characters stored in the DB come through fine, but the Chinese characters produced by the statement become ? characters.
For me, characterEncoding=utf8 alone was not working. After adding this:
stdin {
  codec => plain { charset => "UTF-8"}
}
it works well.
Here is my working conf file. It's been a while since this question was asked, but I hope it helps someone.
input {
  jdbc {
    jdbc_connection_string => "jdbc:postgresql://localhost:5432/atlasdb?useTimezone=true&useLegacyDatetimeCode=false&serverTimezone=UTC&useSSL=false&useUnicode=true&characterEncoding=utf8"
    jdbc_user => "atlas"
    jdbc_password => "atlas"
    jdbc_validate_connection => true
    jdbc_driver_library => "/lib/postgres-42-test.jar"
    jdbc_driver_class => "org.postgresql.Driver"
    schedule => "* * * * *"
    statement => "SELECT * from naver_city"
  }
  stdin {
    codec => plain { charset => "UTF-8"}
  }
}
output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
    index => "2020-04-23-2"
    doc_as_upsert => true
    action => "update"
    document_id => "%{code}"
  }
  stdout { codec => rubydebug }
}
I encountered this problem when using a query containing Japanese characters. You can change jdbc_connection_string in materials.conf:
jdbc_connection_string => "jdbc:mysql://localhost:3306/sc_education?useSSL=false&useUnicode=true&characterEncoding=utf8"
Then restart Logstash.

Better way to extract json map into struct

I'm new to Elixir and I want to parse a JSON file. One part of it is a question/answer array of objects.
[
  {
    "questionId":1,
    "question":"Information: Personal Information: First Name",
    "answer":"Joe"
  },
  {
    "questionId":3,
    "question":"Information: Personal Information: Last Name",
    "answer":"Smith"
  },
  ...
]
I know which questionId's I want, and I'm going to make a map: 1 = First Name, 3 = Last Name.
But currently I'm doing the following to put the data into the struct.
defmodule Student do
  defstruct first_name: nil, last_name: nil, student_number: nil

  defguard is_first_name(id) when id == 1
  defguard is_last_name(id) when id == 3
  defguard is_student_number(id) when id == 7
end

defmodule AFMC do
  import Student

  @moduledoc """
  Documentation for AFMC.
  """

  @doc """
  Hello world.

  ## Examples

      iex> AFMC.hello
      :world

  """
  def main do
    get_json()
    |> get_outgoing_applications
  end

  def get_json do
    with {:ok, body} <- File.read("./lib/afmc_import.txt"),
         {:ok, body} <- Poison.Parser.parse(body), do: {:ok, body}
  end

  def get_outgoing_applications(map) do
    {:ok, body} = map
    out_application = get_in(body, ["outgoingApplications"])

    Enum.at(out_application, 0)
    |> get_in(["answers"])
    |> get_person
  end

  def get_person(answers) do
    student =
      Enum.reduce(answers, %Student{}, fn answer, acc ->
        if Student.is_first_name(answer["questionId"]) do
          acc = %{acc | first_name: answer["answer"]}
        end

        if Student.is_last_name(answer["questionId"]) do
          acc = %{acc | last_name: answer["answer"]}
        end

        if Student.is_student_number(answer["questionId"]) do
          acc = %{acc | student_number: answer["answer"]}
        end

        acc
      end)

    IO.inspect "test"
    student
  end
end
I'm wondering whether there is a better way to write get_person without all the if statements, given that I know which questionId maps to which field (e.g. questionId 1 to first_name).
The data will then be saved into a DB.
Thanks
I'd store a mapping of id to field name. With that you don't need any if inside the reduce. Some pattern matching will also make it unnecessary to do answer["questionId"] etc.
defmodule Student do
  defstruct first_name: nil, last_name: nil, student_number: nil

  @fields %{
    1 => :first_name,
    3 => :last_name,
    7 => :student_number
  }

  def parse(answers) do
    Enum.reduce(answers, %Student{}, fn %{"questionId" => id, "answer" => answer}, acc ->
      %{acc | @fields[id] => answer}
    end)
  end
end
IO.inspect(
  Student.parse([
    %{"questionId" => 1, "question" => "", "answer" => "Joe"},
    %{"questionId" => 3, "question" => "", "answer" => "Smith"},
    %{"questionId" => 7, "question" => "", "answer" => "123"}
  ])
)
Output:
%Student{first_name: "Joe", last_name: "Smith", student_number: "123"}
Edit: to skip ids not present in the map, change:
%{acc | @fields[id] => answer}
to:
if field = @fields[id], do: %{acc | field => answer}, else: acc

Store the output of results.each into array in ruby on rails

Code:
db = Mysql2::Client.new(:host => 'localhost', :username => 'username',
                        :password => 'password', :database => 'database')
results = db.query("select * from users where exported is not TRUE OR NULL").each(:as => :array)
results.each { |row| puts row[1] }
The results.each line outputs company data, and I want to use each row as input to an API call. Any ideas how to do this? Each row should populate an attribute like below.
"requested_item_value_attributes" => {
  "employee_first_name_6000555821" => 'results.each { | row | puts row[0]}',
  "employee_last_name_6000555821" => "results.each { | row | puts row[1]}",
  "hiring_manager_6000555821" => "results.each { | row | puts row[2]}",
  "job_title" => "results.each { | row | puts row[3]}",
  "start_date" => "#results.each { | row | puts row[4]}"
}
You can use
nameArray = Array.new
nameArray.push(nameToSave)
to add the variable nameToSave to the end of the array nameArray.
Just call push for each of your results and you have an array with all your names from your query.
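Applied to the query above, that could look like this (the sample rows here stand in for the MySQL result set):

```ruby
# Sample rows standing in for the rows returned by the query.
rows = [
  ["Joe", "Smith", "Ann Lee", "Engineer", "2020-01-06"],
  ["Sue", "Jones", "Bob Kim", "Designer", "2020-02-03"]
]

first_names = Array.new
rows.each { |row| first_names.push(row[0]) }  # push one value per row

puts first_names.inspect  # => ["Joe", "Sue"]
```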
Use Array#map to map the results to an array:
results.map do |row|
  {
    "requested_item_value_attributes" => {
      "employee_first_name_6000555821" => row[0],
      "employee_last_name_6000555821" => row[1],
      "hiring_manager_6000555821" => row[2],
      "job_title" => row[3],
      "start_date" => row[4]
    }
  }
end
or, even better:
results.map do |row|
  {
    "requested_item_value_attributes" =>
      %w[
        employee_first_name_6000555821
        employee_last_name_6000555821
        hiring_manager_6000555821
        job_title
        start_date
      ].zip(row.take(5)).to_h
  }
end
Use the query method's second argument (an options hash) to get rows as arrays, then collect them:
results = db.query('SELECT * FROM table', :as => :array).to_a

How to use Ruby's CSV.parse to insert all columns of data into SQL?

I'm having trouble with parsing my ICD Code CSV file. The 'descriptions' are saving correctly, but the codes are not being inserted alongside them. I've tried multiple ways to process the file, and I cannot get both columns and all of their rows into their respective database entries. My code is below.
Seeds.rb
require 'csv'
icd_codes = File.read(Rails.root.join('lib', 'seeds', 'icd10cm_order_2017.csv'))
icd_codes = CSV.parse(icd_codes, :headers => true, :encoding => 'ISO-8859-1')
icd_codes.each do |row|
  t = Icd.new
  t.code = row['Code']
  t.description = row['Description']
  t.save
  puts "#{t.code}, #{t.description} saved"
end
Schema.rb
create_table "icds", force: :cascade do |t|
  t.string "code"
  t.text "description"
  t.datetime "created_at", null: false
  t.datetime "updated_at", null: false
end
My CSV File
Code Description
A00 Cholera
A000 Cholera due to Vibrio cholerae 01, biovar cholerae
A001 Cholera due to Vibrio cholerae 01, biovar eltor
A009 Cholera, unspecified
A01 Typhoid and paratyphoid fevers
A010 Typhoid fever
Rails Console output
[#<Icd id: 1, code: nil, description: "Cholera", created_at: "2017-01-04 19:18:31", updated_at: "2017-01-04 19:18:31">
To solve this problem, I used the position of the column in the row instead of the header name:
require 'csv'
icd_codes = File.read(Rails.root.join('lib', 'seeds', 'icd10cm_order_2017.csv'))
icd_codes = CSV.parse(icd_codes, :headers => true, :encoding => 'ISO-8859-1')
icd_codes.each do |row|
  t = Icd.new
  t.code = row[0]
  t.description = row[1]
  t.save!
  puts "#{t.code}, #{t.description} saved"
end
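One possible root cause, for anyone hitting the same thing (an assumption here, since the raw file isn't shown): if the header row doesn't exactly match 'Code' -- a different delimiter, a BOM, or stray whitespace -- then row['Code'] returns nil while positional access still works. If the file turns out to be tab-delimited, header access also works once col_sep is set:

```ruby
require 'csv'

# Tab-separated sample standing in for icd10cm_order_2017.csv.
data = "Code\tDescription\n" \
       "A00\tCholera\n" \
       "A000\tCholera due to Vibrio cholerae 01, biovar cholerae\n"

rows = CSV.parse(data, headers: true, col_sep: "\t")
rows.each do |row|
  # Header lookup now finds both columns.
  puts "#{row['Code']}, #{row['Description']}"
end
# => A00, Cholera
# => A000, Cholera due to Vibrio cholerae 01, biovar cholerae
```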

Why can I not update the foreign key of an object in a 1:1 association?

I have two models, User and Profile.
A User has_one Profile and a Profile belongs_to User.
Correspondingly, the Profile model has a user_id attribute.
The association works:
p = Profile.first
=> #<Profile id: 1, name: "Jack", ... , user_id: 1>
u = User.first
=> #<User id: 1, email: "jack@example.com", ... >
u.profile.id
=> 1
p.user.id
=> 1
p.user == u
=> true
u.profile == p
=> true
I can set the user_id field on a Profile directly:
p.user_id = 2
=> 2
p.save!
=> true
p.user_id
=> 2
But why can I not set the user_id like this:
u.profile.user_id = 2
=> 2
u.profile.save!
=> 2
u.profile.user_id
=> 1
You must refresh the u.profile object. Try this:
u.profile.user_id = 2
=> 2
u.profile.save!
=> 2
u.profile.reload.user_id
=> 2
This is because the original profile object is still loaded in memory on u.
Hope this helps :)
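The caching can be sketched without Rails; here a toy DB hash and a Struct stand in for the database row and the model (hypothetical names, not Active Record itself):

```ruby
# Toy "database" holding the single profile row.
DB = { user_id: 1 }

Profile = Struct.new(:user_id) do
  def save!
    DB[:user_id] = user_id  # write the in-memory value back to the store
    true
  end

  def reload
    self.user_id = DB[:user_id]  # re-read the row from the store
    self
  end
end

# Two in-memory copies of the same row, as when p = Profile.first
# and u.profile are loaded separately:
p = Profile.new(DB[:user_id])
cached = Profile.new(DB[:user_id])

p.user_id = 2
p.save!                     # the database row now holds 2...
puts cached.user_id         # => 1  (...but the other copy is stale)
puts cached.reload.user_id  # => 2  (reload re-reads the row)
```

The two Profile instances are separate in-memory copies of the same row, which is exactly why the reload call in the answer is needed.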