Proper way to (get from / insert into) a table using the Erlang MySQL driver - mysql

I am trying to get erlang-mysql-driver working. I managed to set it up and make queries, but there are two things I cannot do (https://code.google.com/archive/p/erlang-mysql-driver/issues).
(BTW, I am new to Erlang.)
So here is my code to connect to MySQL.
<erl>
out(Arg) ->
    mysql:start_link(p1, "127.0.0.1", "root", "azzkikr", "MyDB"),
    {data, Result} = mysql:fetch(p1, "SELECT * FROM messages").
</erl>
1. I cannot get data from the table.
mysql.erl doesn't contain any specific information on how to get table data, but this is the farthest I could go:
{A,B} = mysql:get_result_rows(Result),
B.
And the result was this:
ERROR erlang code threw an uncaught exception:
File: /Users/{username}/Sites/Yaws/index.yaws:1
Class: error
Exception: {badmatch,[[4,0,<<"This is done baby!">>,19238],
[5,0,<<"Success">>,19238],
[6,0,<<"Hello">>,19238]]}
Req: {http_request,'GET',{abs_path,"/"},{1,1}}
Stack: [{m181,out,1,
[{file,"/Users/{username}/.yaws/yaws/default/m181.erl"},
{line,18}]},
{yaws_server,deliver_dyn_part,8,
[{file,"yaws_server.erl"},{line,2818}]},
{yaws_server,aloop,4,[{file,"yaws_server.erl"},{line,1232}]},
{yaws_server,acceptor0,2,[{file,"yaws_server.erl"},{line,1068}]},
{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,240}]}]
I understand that somehow I need to get the second element and iterate over it to get each row, but the strings come back in a different format: the stored string is Success but the returned value is <<"Success">>.
{badmatch,[[4,0,<<"This is done baby!">>,19238],
[5,0,<<"Success">>,19238],
[6,0,<<"Hello">>,19238]]}
First question: how do I get data from the table?
2. How do I insert values into the table using variables?
I can insert data into the table using this method:
Msg = "Hello World",
mysql:prepare(add_message,<<"INSERT INTO messages (`message`) VALUES (?)">>),
mysql:execute(p1, add_message, [Msg]).
But there are two things I am having trouble with:
1. I am inserting the data without the << and >> delimiters, because when I do Msg = << ++ "Hello World" >>, Erlang throws an exception (I think I am doing something wrong). I don't know whether they are required, but without them I am able to insert data into the table, except that this error bothers me after execution:
yaws code at /Users/{username}/Yaws/index.yaws:1 crashed or ret bad val:
{updated, {mysql_result, [], [], 1, []}}
Req: {http_request,'GET',{abs_path,"/"},{1,1}}
The returned atom is updated even though I asked it to insert data.
Second question: how do I insert data into the table in a proper way?

Error:
{badmatch,[[4,0,<<"This is done baby!">>,19238],
[5,0,<<"Success">>,19238],
[6,0,<<"Hello">>,19238]]}
tells you that the returned value is:
[[4,0,<<"This is done baby!">>,19238],
[5,0,<<"Success">>,19238],
[6,0,<<"Hello">>,19238]]
This obviously cannot match either {data, Data} or {A, B}. You can obtain your data as follows:
<erl>
out(Arg) ->
    mysql:start_link(p1, "127.0.0.1", "root", "azzkikr", "MyDB"),
    {data, Result} = mysql:fetch(p1, "SELECT * FROM messages"),
    {ehtml,
     [{table, [{border, "1"}],
       [{tr, [],
         [{td, [],
           case Val of
               _ when is_binary(Val)  -> yaws_api:htmlize(Val);
               _ when is_integer(Val) -> integer_to_binary(Val)
           end}
          || Val <- Row
         ]}
        || Row <- mysql:get_result_rows(Result)
       ]}
     ]}.
</erl>
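The second (insert) question is mostly about the return value. A binary literal is written as <<"Hello World">> (or build one from an existing string with list_to_binary/1), and mysql:execute/3 returns {updated, MysqlRes} for an INSERT, so match on that and return proper ehtml from out/1 instead of the raw mysql result, which is what yaws is complaining about. A minimal sketch, assuming the p1 connection started in the snippet above:
<erl>
out(Arg) ->
    %% list_to_binary("Hello World") would also work here
    Msg = <<"Hello World">>,
    mysql:prepare(add_message, <<"INSERT INTO messages (`message`) VALUES (?)">>),
    %% an INSERT comes back as {updated, MysqlRes}, not {data, ...}
    {updated, _MysqlRes} = mysql:execute(p1, add_message, [Msg]),
    {ehtml, {p, [], "Message stored."}}.
</erl>
If you ever need one of the returned binaries as an ordinary string, binary_to_list(<<"Success">>) gives back "Success".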

Related

Python 3 psycopg2 COPY from stdin failed: error in .read()

I am trying to apply the code found on this page, specifically the 'Copy Data from String Iterator' part of the Table of Contents, but I run into an issue with my code.
Since not all lines coming from the generator (here log_lines) can be imported into the PostgreSQL database, I try to keep only the valid lines (here row) using itertools.filterfalse, as in the code block below:
def copy_string_iterator(connection, log_lines) -> None:
    with connection.cursor() as cursor:
        create_staging_table(cursor)
        log_string_iterator = StringIteratorIO((
            '|'.join(map(clean_csv_value, (
                row['date'],
                row['time'],
                row['cs_uri_query'],
                row['s_contentpath'],
                row['sc_status'],
                row['s_computername'],
                ...
                row['sc_substates'],
                row['s_port'],
                row['cs_version'],
                row['c_protocol'],
                row.update({'cs_cookie':'x'}),
                row['timetakenms'],
                row['cs_uri_stem'],
            ))) + '\n'
            for row in filterfalse(lambda line: "#" in line.get('date'), log_lines)
        ))
        cursor.copy_from(log_string_iterator, 'log_table', sep='|')
When I run this, cursor.copy_from() gives me the following error:
QueryCanceled: COPY from stdin failed: error in .read() call
CONTEXT: COPY log_table, line 112910
I understand why this error happens: in the test file I use, there are only 112909 lines that meet the filterfalse condition. But why does it try to copy line 112910 and throw the error instead of just stopping?
Since Python doesn't have a coalescing operator, add something like:
(map(clean_csv_value, (
row['date'] if 'date' in row else None,
:
row['cs_uri_stem'] if 'cs_uri_stem' in row else None,
))) + '\n')
for each of your fields, so you can handle any missing fields in the JSON file. Of course, the fields should be nullable in the database if you use None; otherwise, replace None with some default value for that field.
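dict.get gives the same effect with a bit less repetition, since it returns None for missing keys. A small self-contained sketch; clean_csv_value below is a stand-in for the helper from the referenced post, and the field list is a hypothetical subset. (The "error in .read() call" is most likely a KeyError raised inside the generator while psycopg2 was reading, which is why handling the missing fields makes it go away.)
def clean_csv_value(value):
    # Stand-in for the helper from the referenced post: map missing values to
    # the COPY NULL marker and escape embedded newlines.
    if value is None:
        return r'\N'
    return str(value).replace('\n', '\\n')

def to_copy_line(row):
    # dict.get returns None when a key is missing, which plays the role of a
    # coalescing operator here; the field list is a hypothetical subset.
    fields = ('date', 'time', 'cs_uri_stem')
    return '|'.join(map(clean_csv_value, (row.get(f) for f in fields))) + '\n'

print(to_copy_line({'date': '2020-01-01', 'time': '12:00:00'}), end='')
# prints: 2020-01-01|12:00:00|\N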

Ruby: Handling different JSON response that is not what is expected

I searched online and read through the documentation, but have not been able to find an answer. I am fairly new, and as part of learning Ruby I wanted to make the script below.
The script essentially does a carrier lookup on a list of numbers provided through a CSV file. The CSV file has just one column, with the header "number".
Everything runs fine UNTIL the API gives me an output that is different from the others. In this example, it tells me that one of the numbers in my file is not a valid US number, which causes my script to stop running.
I am looking to see if there is a way to either ignore it (I read about begin/rescue/end, but was not able to get it to work) or, ideally, either create a separate file with those errors or put that data into the main file.
Any help would be much appreciated. Thank you.
Ruby Code:
require 'csv'
require 'uri'
require 'net/http'
require 'json'

number = 0

CSV.foreach('data1.csv', headers: true) do |row|
  number = row['number'].to_i
  uri = URI("https://api.message360.com/api/v3/carrier/lookup.json?PhoneNumber=#{number}")
  req = Net::HTTP::Post.new(uri)
  req.basic_auth 'XXX', 'XXX'
  res = Net::HTTP.start(uri.hostname, uri.port, :use_ssl => true) { |http|
    http.request(req)
  }
  json = JSON.parse(res.body)
  new = json["Message360"]["Carrier"].values
  CSV.open("new.csv", "ab") do |csv|
    csv << new
  end
end
File Data:
number
5556667777
9998887777
Good Response example in JSON:
{"Message360"=>{"ResponseStatus"=>1, "Carrier"=>{"ApiVersion"=>"3", "CarrierSid"=>"XXX", "AccountSid"=>"XXX", "PhoneNumber"=>"+19495554444", "Network"=>"Cellco Partnership dba Verizon Wireless - CA", "Wireless"=>"true", "ZipCode"=>"92604", "City"=>"Irvine", "Price"=>0.0003, "Status"=>"success", "DateCreated"=>"2018-05-15 23:05:15"}}}
The response that causes the script to stop:
{
  "Message360": {
    "ResponseStatus": 0,
    "Errors": {
      "Error": [
        {
          "Code": "ER-M360-CAR-111",
          "Message": "Allowed Only Valid E164 North American Numbers.",
          "MoreInfo": []
        }
      ]
    }
  }
}
It would appear you can just check json["Message360"]["ResponseStatus"] first: 0 or 1 indicates failure or success.
I'd probably also add a rescue to help catch any other errors (malformed JSON, network issues, etc.):
CSV.foreach('data1.csv', headers: true) do |row|
  number = row['number'].to_i
  ...
  json = JSON.parse(res.body)
  if json["Message360"]["ResponseStatus"] == 1
    new = json["Message360"]["Carrier"].values
    CSV.open("new.csv", "ab") do |csv|
      csv << new
    end
  else
    # handle bad response
  end
rescue StandardError => e
  # request failed for some reason, log e and the number?
end
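If you also want the failures in a separate file, as mentioned in the question, the else/rescue branches above can append to their own CSV. A small sketch; errors.csv and the fields pulled out of the error response are assumptions:
require 'csv'

def log_error(number, json)
  CSV.open("errors.csv", "ab") do |csv|
    # The error payload in the failing response sits under
    # Message360 -> Errors -> Error (an array); fall back to an empty hash.
    error = json.dig("Message360", "Errors", "Error")&.first || {}
    csv << [number, error["Code"], error["Message"]]
  end
end

# In the loop above, call log_error(number, json) from the else branch, and
# log_error(number, {}) (or log the exception) from the rescue branch.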

How to add appending data to MySQL through R?

I am running an R script that adds data to a MySQL database. I usually format the data and add it as it comes in, every couple of hours (the data stream is not continuous). My first set of data was added properly to the MySQL database; the second batch of data cannot be added properly.
con = dbConnect(MySQL(), user='root', password='xxxxxx', dbname='test', host='localhost')
dbWriteTable(con, 'Tables', value = parseTweets(filterStream(file.name= "", track=c("lebron"), timeout=10, oauth=my_oauth)))
When I rerun the last piece of code (dbWriteTable), it gives me the following error:
Error: Error in .local(conn, statement, ...) :
could not run statement: Table 'tables' already exists
I also used
dbWriteTable(con, 'Tables', value = parseTweets(filterStream(file.name= "", track=c("lebron"), timeout=10, oauth=my_oauth)), append = TRUE)
but it produces the same error.
For some reason, setting append to TRUE is not working. Instead, give it a number. See the code below for a better understanding:
dbWriteTable(con, 'Tables', value = parseTweets(filterStream(file.name= "", track=c("lebron"), timeout=100, oauth=my_oauth)), overwrite = 0, row.names = 0, append = 1)

Converting epgsql results to JSON

I am a total beginner with Erlang and functional programming in general. For fun, to get me started, I am converting an existing Ruby Sinatra REST(ish) API that queries PostgreSQL and returns JSON.
On the Erlang side I am using Cowboy, Epgsql and Jiffy as the JSON library.
Epgsql returns results in the following format:
{ok, [{column,<<"column_name">>,int4,4,-1,0}], [{<<"value">>}]}
But Jiffy expects the following format when encoding to JSON:
{[{<<"column_name">>,<<"value">>}]}
The following code works to convert epgsql output into suitable input for Jiffy, assuming Data is the epgsql output and Key is the name of the JSON object being created:
{_, C, R} = Data,
Columns = [X || {_, X, _, _, _, _} <- C],
Rows = tuple_to_list(hd(R)),
Result = {[{atom_to_binary(Key, utf8), {lists:zip(Columns, Rows)}}]}.
However, I am wondering if this is efficient Erlang?
I've looked into the documentation for Epgsql and Jiffy and can't see any more obvious ways to perform the conversion.
Thank you.
Yes, you need to parse it.
For example, a function that parses the result:
parse_result({error, #error{ code = <<"23505">>, extra = Extra }}) ->
    {match, [Column]} =
        re:run(proplists:get_value(detail, Extra),
               "Key \\(([^\\)]+)\\)", [{capture, all_but_first, binary}]),
    throw({error, {non_unique, Column}});
parse_result({error, #error{ message = Msg }}) ->
    throw({error, Msg});
parse_result({ok, Cols, Rows}) ->
    to_map(Cols, Rows);
parse_result({ok, Counts, Cols, Rows}) ->
    {ok, Counts, to_map(Cols, Rows)};
parse_result(Result) ->
    Result.
And a function that converts the result to a list of maps:
to_map(Cols, Rows) ->
    [ maps:from_list(lists:zipwith(fun(#column{name = N}, V) -> {N, V} end,
                                   Cols, tuple_to_list(Row))) || Row <- Rows ].
Then encode it to JSON. You can also change my code to produce a proplist instead.
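For the final encoding step, reasonably recent jiffy versions encode Erlang maps as JSON objects, so the output of to_map/2 can be passed straight to jiffy:encode/1. A minimal sketch, assuming Key is the atom from the question; a proplist ("EJSON") variant is included in case you prefer that form:
encode_result(Key, Cols, Rows) ->
    %% jiffy encodes Erlang maps as JSON objects
    jiffy:encode(#{ atom_to_binary(Key, utf8) => to_map(Cols, Rows) }).

%% Proplist / EJSON variant, if you would rather avoid maps:
to_ejson(Cols, Rows) ->
    [ {lists:zipwith(fun(#column{name = N}, V) -> {N, V} end,
                     Cols, tuple_to_list(Row))} || Row <- Rows ].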

RMySQL dbWriteTable adding columns to table (dynamically?)

I just started using the R package called RMySQL in order to get around some memory limitations on my computer. I am trying to take a matrix with 100 columns in R (called data.df), then make a new table on an SQL database that has "100 choose 2" (=4950) columns, where each column is a linear combination of two columns from the initial matrix. So far I have something like this:
countnumber <- 1
con <- dbConnect(MySQL(), user = "root", password = "password", dbname = "myDB")
temp <- as.data.frame(data.df[,1] - data.df[,2])
colnames(temp) <- paste(pairs[[countnumber]][1], pairs[[countnumber]][2], sep = "")
dbWriteTable(con, "spreadtable", temp, row.names = T, overwrite = T)

for (i in 1:(n-1)) {
  for (j in (i+1):n) {
    if (!((i==1) && (j==2))) {  # this part excludes the first iteration already taken care of
      temp <- as.data.frame(data.df[,i] - data.df[,j])
      colnames(temp) <- "hola"
      dbWriteTable(con, "spreadtable", value = temp, append = TRUE, overwrite = FALSE, row.names = FALSE)
      countnumber <- countnumber + 1
    }
  }
}
I've also tried toying around with the "field.types" argument of RMySQL::dbWriteTable(), which was suggested at RMySQL dbWriteTable with field.types. Sadly it hasn't helped me out too much.
Questions:
Is making your own sql database a valid solution to the memory-bound nature of R, even if it has 4950 columns?
Is the dbWriteTable() the proper function to be using here?
Assuming the answer is "yes" to both of the previous questions...why isn't this working?
Thanks for any help.
[EDIT]: code with error output:
names <- as.data.frame(index)
names <- t(names)
#dim(names) is 1 409
con <- dbConnect(MySQL(), user = "root", password = "password", dbname = "taylordatabase")
dbGetQuery(con, dbBuildTableDefinition(MySQL(), name="spreadtable", obj=names, row.names = F))
#I would prefer these to be double types with 8 decimal places instead of text
#dim(temp) is 1 409
temp <- as.data.frame(data.df[,1] - (ratios[countnumber]*data.df[,2]))
temp <- t(temp)
temp <- as.data.frame(temp)
dbWriteTable(con, name = "spreadtable", temp, append = T)
The table is created successfully in the database (I will change the variable types later), but the dbWriteTable() line produces the error:
Error in mysqlExecStatement(conn, statement, ...) :
RS-DBI driver: (could not run statement: Unknown column 'row_names' in 'field list')
[1] FALSE
Warning message:
In mysqlWriteTable(conn, name, value, ...) : could not load data into table
If I make a slight change:
dbWriteTable(con, name = "spreadtable", temp, append = T, row.names = F)
I get a different error message:
Error in mysqlExecStatement(conn, statement, ...) :
RS-DBI driver: (could not run statement: Unknown column 'X2011_01_03' in 'field list')
[1] FALSE
Warning message:
In mysqlWriteTable(conn, name, value, ...) : could not load data into table
I just want to use names as a bunch of column labels (they were initially dates), and temp as the actual data.
Having a query with 4,950 rows is OK; the problem is which columns you need.
If you always SELECT *, you will eventually exhaust all your system memory (in the case where the table has 100 columns).
Why not give us the error message if you have encountered any problems?
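A minimal sketch of that point: pull only the columns you actually need instead of SELECT * against a very wide table. The column names and the LIMIT are hypothetical:
library(RMySQL)

con <- dbConnect(MySQL(), user = "root", password = "password", dbname = "myDB")
# Fetch just the two spread columns that are needed, not all 4,950 of them
spread_subset <- dbGetQuery(con, "SELECT `AB`, `AC` FROM spreadtable LIMIT 1000")
dbDisconnect(con)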