I'm trying to save some data into a table. I get the data from a database, and that part works fine.
My problem is that the data is not saved in the table. It is a Lua table, like table = {}, and NOT a database table.
Maybe it is saved, but it looks like the prints are done before the saving, even though I call them after. In fact, it seems like my network request is executed last in my program even though I call it first.
I would really like to know the reason for this. Any ideas?
Here is the code:
---TESTING!
print("Begin testing!")
--hej = require ( "test2" )

local navTable = {
    Eng_Spd = 0,
    Spd_Set = 0
}

local changeTab = function()
    navTable.Eng_Spd = 2
end

printNavTable = function()
    print("navTable innehåller: ")
    print(navTable.Eng_Spd)
    print(navTable.Spd_Set)
end

require "sqlite3"
local myNewData
local json = require ("json")
local decodedData

local SaveData2 = function()
    local i = 1
    local counter = 1
    local index = "livedata"..counter
    local navValue = decodedData[index]
    print(navValue)
    while (navValue ~= nil) do
        --tablefill = "INSERT INTO navaltable VALUES (NULL,'" .. navValue[1] .. "','" .. navValue[3] .. "','" .. navValue[4] .. "','" .. navValue[5] .. "','" .. navValue[6] .. "');"
        --print(tablefill)
        --db:exec(tablefill)
        if navValue[3] == "Eng Spd" then navTable.Eng_Spd = navValue[4]
        elseif navValue[3] == "Spd Set" then navTable.Spd_Set = navValue[4]
        else print("blah")
        end
        print(navTable.Eng_Spd)
        print(navTable.Spd_Set)
        counter = counter + 1
        index = "livedata"..counter
        navValue = decodedData[index]
    end
end

local function networkListener( event )
    if (event.isError) then
        print("Network error!")
    else
        myNewData = event.response
        print("From server: "..myNewData)
        decodedData = (json.decode(myNewData))
        SaveData2()
        --db:exec("DROP TABLE IN EXISTS navaltable")
    end
end

--function uppdateNavalTable()
network.request( "http://127.0.0.1/firstMidle.php", "GET", networkListener )
--end

changeTab()
printNavTable()
--uppdateNavalTable()
printNavTable()
print("Done!")
And here is the output:
Copyright (C) 2009-2012 Corona Labs Inc.
Version: 2.0.0
Build: 2012.971
Begin testing!
navTable innehåller:
2
0
navTable innehåller:
2
0
Done!
From server: {"livedata1":["1","0","Eng Spd","30","0","2013-03-15 11:35:48"],"livedata2":["1","1","Spd Set","13","0","2013-03-15 11:35:37"]}
table: 008B5018
30
0
30
13
And by the way, "navTable innehåller" is Swedish for "navTable contains".
The answer is that the network request is asynchronous: networkListener does not run in line with the rest of the code. network.request() returns immediately, the main chunk runs to completion (which is why "Done!" prints first), and only afterwards is the listener invoked with the server's response.
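Anything that depends on the response therefore has to be triggered from inside the listener. A minimal sketch of that restructuring, reusing the names from the code above (an outline, not a drop-in replacement):

local function networkListener( event )
    if event.isError then
        print("Network error!")
    else
        decodedData = json.decode(event.response)
        SaveData2()
        -- navTable is only filled in at this point, so print it here,
        -- not from the main chunk
        printNavTable()
    end
end

network.request("http://127.0.0.1/firstMidle.php", "GET", networkListener)
-- code here keeps running immediately; it cannot see the response yet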
I encountered this bug in Lua's cjson when I was using a script in Redis 3.2 to set a particular value in a JSON object.
Currently, the Lua in Redis does not differentiate between an empty JSON array and an empty JSON object, which causes serious problems when serialising JSON objects that have arrays within them.
eval "local json_str = '{\"items\":[],\"properties\":{}}' return cjson.encode(cjson.decode(json_str))" 0
Result:
"{\"items\":{},\"properties\":{}}"
I found this solution, https://github.com/mpx/lua-cjson/issues/11, but I wasn't able to implement it in a Redis script.
This is an unsuccessful attempt:
eval
"function cjson.mark_as_array(t)
    local mt = getmetatable(t) or {}
    mt.__is_cjson_array = true
    return setmetatable(t, mt)
end
function cjson.is_marked_as_array(t)
    local mt = getmetatable(t)
    return mt and mt.__is_cjson_array
end
local json_str = '{\"items\":[],\"properties\":{}}'
return cjson.encode(cjson.decode(json_str))"
0
Any help or pointer appreciated.
There are two plans.
Plan 1: modify the lua-cjson source code and recompile Redis (see the repository linked in the comment below for details).
Plan 2: work around it in the script itself:
local now = redis.call("time")
-- local timestamp = tonumber(now[1]) * 1000 + math.floor(now[2]/1000)
math.randomseed(now[2])
local emptyFlag = "empty_" .. now[1] .. "_" .. now[2] .. "_" .. math.random(10000)
local emptyArrays = {}

local function emptyArray()
    if cjson.as_array then
        -- cjson fixed: https://github.com/xiyuan-fengyu/redis-lua-cjson-empty-table-fix
        local arr = {}
        setmetatable(arr, cjson.as_array)
        return arr
    else
        -- plan 2
        local arr = {}
        table.insert(emptyArrays, arr)
        return arr
    end
end

local function toJsonStr(obj)
    if #emptyArrays > 0 then
        -- plan 2
        for i, item in ipairs(emptyArrays) do
            if #item == 0 then
                -- empty array, insert a special mark
                table.insert(item, 1, emptyFlag)
            end
        end
        local jsonStr = cjson.encode(obj)
        -- replace empty array
        jsonStr = (string.gsub(jsonStr, '%["' .. emptyFlag .. '"]', "[]"))
        for i, item in ipairs(emptyArrays) do
            if item[1] == emptyFlag then
                table.remove(item, 1)
            end
        end
        return jsonStr
    else
        return cjson.encode(obj)
    end
end

-- example
local arr = emptyArray()
local str = toJsonStr(arr)
print(str) -- "[]"
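With those helpers in scope, nested structures round-trip as expected too; this usage example is my addition, not part of the original post:

-- hypothetical usage of the emptyArray/toJsonStr helpers above
local payload = { items = emptyArray(), properties = { count = 0 } }
local str2 = toJsonStr(payload)
-- str2 is now {"items":[],"properties":{"count":0}} (key order may vary)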
This question is about making any nnGraph network run on multiple GPUs; it is not specific to the network instance below.
I am trying to train a network constructed with nnGraph (the backward diagram is attached). I am trying to run parallelModel (see the code, or Node 9 in the figure) in a multi-GPU setting. If I attach the parallel model to an nn.Sequential container and then create a DataParallelTable, it works in a multi-GPU setting (without nnGraph). However, after attaching it to nnGraph I get an error. The backward pass works if I train on a single GPU (changing true to false in the if statements), but in a multi-GPU setting I get the error "gmodule.lua:418: attempt to index local 'gradInput' (a nil value)". I think Node 9 in the backward pass should run on multiple GPUs, but that is not happening. Creating a DataParallelTable around the nnGraph module didn't work for me; I thought that at least putting the internal Sequential networks in a DataParallelTable would work. Is there some other way to split the initial data being passed to nnGraph so that it runs on multiple GPUs?
require 'torch'
require 'nn'
require 'cudnn'
require 'cunn'
require 'cutorch'
require 'nngraph'

data1 = torch.ones(4,20):cuda()
data2 = torch.ones(4,10):cuda()

tmodel = nn.Sequential()
tmodel:add(nn.Linear(20,10))
tmodel:add(nn.Linear(10,10))

parallelModel = nn.ParallelTable()
parallelModel:add(tmodel)
parallelModel:add(nn.Identity())
parallelModel:add(nn.Identity())

model = parallelModel

if true then
    local function sharingKey(m)
        local key = torch.type(m)
        if m.__shareGradInputKey then
            key = key .. ':' .. m.__shareGradInputKey
        end
        return key
    end

    -- Share gradInput for memory efficient backprop
    local cache = {}
    model:apply(function(m)
        local moduleType = torch.type(m)
        if torch.isTensor(m.gradInput) and moduleType ~= 'nn.ConcatTable' then
            local key = sharingKey(m)
            if cache[key] == nil then
                cache[key] = torch.CudaStorage(1)
            end
            m.gradInput = torch.CudaTensor(cache[key], 1, 0)
        end
    end)
end

if true then
    cudnn.fastest = true
    cudnn.benchmark = true

    -- Wrap the model with DataParallelTable, if using more than one GPU
    local gpus = torch.range(1, 2):totable()
    local fastest, benchmark = cudnn.fastest, cudnn.benchmark
    local dpt = nn.DataParallelTable(1, true, true)
        :add(model, gpus)
        :threads(function()
            local cudnn = require 'cudnn'
            cudnn.fastest, cudnn.benchmark = fastest, benchmark
        end)
    dpt.gradInput = nil
    model = dpt:cuda()
end

newmodel = nn.Sequential()
newmodel:add(model)

input1 = nn.Identity()()
input2 = nn.Identity()()
input3 = nn.Identity()()

out = newmodel({input1, input2, input3})

r1 = nn.NarrowTable(1,2)(out)
r2 = nn.NarrowTable(2,2)(out)

f1 = nn.JoinTable(2)(r1)
f2 = nn.JoinTable(2)(r2)

n1 = nn.Sequential()
n1:add(nn.Linear(20,5))
n2 = nn.Sequential()
n2:add(nn.Linear(20,5))

f11 = n1(f1)
f12 = n2(f2)

foutput = nn.JoinTable(2)({f11, f12})

g = nn.gModule({input1, input2, input3}, {foutput})
g = g:cuda()

g:forward({data1, data2, data2})
g:backward({data1, data2, data2}, torch.rand(4,10):cuda())
The code in the "if" statements is taken from Facebook's ResNet implementation.
I have a MySQL server and mysql-proxy, and I am trying to manipulate the results I send to the client as a response to a SELECT query. I have written this code in Lua:
function string.starts(String, Start)
    return string.sub(String, 1, string.len(Start)) == Start
end

function read_query_result(inj)
    local fn = 1
    local fields = inj.resultset.fields
    while fields[fn] do
        fn = fn + 1
    end
    fn = fn - 1
    print("FIELD NUMBER: " .. fn)
    for row in inj.resultset.rows do
        print("--------------")
        for i = 1, fn do
            if (string.starts(fields[i].name, "TEST")) then
                row[i] = "TESTED"
            end
            print("DATA: " .. fields[i].name .. " -> " .. row[i])
        end
    end
    return proxy.PROXY_SEND_RESULT
end
I can correctly read the field names and values, and I can detect the condition where I want the result modified, but I cannot get the modified data sent to the client.
I see two problems:
1. I am setting the value in the local row variable, but I have not found a way to set the real resultset (inj.resultset.row[i] or something similar).
2. There is something wrong with return proxy.PROXY_SEND_RESULT: whenever I comment that statement out I see the results, and if I uncomment it I get an error.
I have not found any example code to use as a reference.
OK. Solved.
1. The data has to be inserted into a new response table.
2. PROXY_SEND_RESULT requires proxy.response.type to be set.
This is the correct module:
function read_query_result(inj)
    local fn = 1
    local fields = inj.resultset.fields
    proxy.response.resultset = {fields = {}, rows = {}}
    while fields[fn] do
        table.insert(proxy.response.resultset.fields,
                     {type = proxy.MYSQL_TYPE_STRING, name = fields[fn].name})
        fn = fn + 1
    end
    fn = fn - 1
    for row in inj.resultset.rows do
        for i = 1, fn do
            if (string.starts(fields[i].name, "TEST")) then
                row[i] = "TESTED"
            end
        end
        table.insert(proxy.response.resultset.rows, row)
    end
    proxy.response.type = proxy.MYSQLD_PACKET_OK
    return proxy.PROXY_SEND_RESULT
end
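One caveat worth adding (my note, based on the standard mysql-proxy tutorial scripts, not part of the original answer): read_query_result() is only invoked for queries that were injected through proxy.queries with the resultset requested, so a pass-through read_query() hook along these lines normally accompanies the module above:

function read_query(packet)
    -- forward plain COM_QUERY packets and ask for the resultset,
    -- so that read_query_result() is called with it
    if packet:byte() == proxy.COM_QUERY then
        proxy.queries:append(1, packet, { resultset_is_needed = true })
        return proxy.PROXY_SEND_QUERY
    end
end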
I am currently writing a script that downloads a bunch of .csv's from an FTP server and then puts each .csv in a MySQL database as its own table.
I download the .csv's from the FTP server using RCurl and place all of them in my working directory. To create a table out of each .csv, I am using the sqlSave function from the RODBC package, where the table name is the same as the name of the .csv. This works fine whenever a .csv name is less than 18 characters, but when it is longer it fails. And by "fails", I mean R crashes. To track down the bug, I called debug on sqlSave.
I found that there are at least two functions that sqlSave calls that cause R to crash. The first is RODBC:::odbcTableExists, which is a non-visible function. Here is the code for the function:
RODBC:::odbcTableExists
function (channel, tablename, abort = TRUE, forQuery = TRUE,
          allowDot = attr(channel, "interpretDot"))
{
    if (!odbcValidChannel(channel))
        stop("first argument is not an open RODBC channel")
    if (length(tablename) != 1)
        stop(sQuote(tablename), " should be a name")
    tablename <- as.character(tablename)
    switch(attr(channel, "case"),
           nochange = {},
           toupper = tablename <- toupper(tablename),
           tolower = tablename <- tolower(tablename))
    isExcel <- odbcGetInfo(channel)[1L] == "EXCEL"
    hasDot <- grepl(".", tablename, fixed = TRUE)
    if (allowDot && hasDot) {
        parts <- strsplit(tablename, ".", fixed = TRUE)[[1]]
        if (length(parts) > 2)
            ans <- FALSE
        else {
            res <- if (attr(channel, "isMySQL"))
                sqlTables(channel, catalog = parts[1], tableName = parts[2])
            else sqlTables(channel, schema = parts[1], tableName = parts[2])
            ans <- is.data.frame(res) && nrow(res) > 0
        }
    }
    else if (!isExcel) {
        res <- sqlTables(channel, tableName = tablename)
        ans <- is.data.frame(res) && nrow(res) > 0
    }
    else {
        res <- sqlTables(channel)
        tables <- stables <- if (is.data.frame(res))
            res[, 3]
        else ""
        if (isExcel) {
            tables <- sub("^'(.*)'$", "\\1", tables)
            tables <- unique(c(tables, sub("\\$$", "", tables)))
        }
        ans <- tablename %in% tables
    }
    if (abort && !ans)
        stop(sQuote(tablename), ": table not found on channel")
    enc <- attr(channel, "encoding")
    if (nchar(enc))
        tablename <- iconv(tablename, to = enc)
    if (ans && isExcel) {
        dbname <- if (tablename %in% stables)
            tablename
        else paste(tablename, "$", sep = "")
        if (forQuery)
            paste("[", dbname, "]", sep = "")
        else dbname
    }
    else if (ans) {
        if (forQuery && !hasDot)
            quoteTabNames(channel, tablename)
        else tablename
    }
    else character(0L)
}
This fails here, when the table name is over 18 characters in length:
res <- sqlTables(channel, tableName = tablename)
I have fixed it by changing this to:
res <- sqlTables(channel, tablename)
I then reassign the function with the same name (odbcTableExists) in the namespace with this code change using assignInNamespace.
RODBC:::odbcTableExists no longer causes an issue. However, R still crashes when sqlwrite is called from within sqlSave(). I called debug on sqlwrite, and I found that RODBC:::odbcColumns (another non-visible function) causes the crash when table names are too long. Unfortunately, I am not sure how to change RODBC:::odbcColumns to avoid the bug like I did before.
I am using R 2.15.1, and the platform is x86_64-pc-mingw32/x64 (64-bit). I should also note that I am trying to run this on a work computer; if I run the exact same code on my personal computer, R does not crash (no bug). The work computer runs Windows 7 Professional, and my home computer runs Windows 7 Home Premium with R 2.14.1.
I love this hack (I too have Windows 7 Professional and R 2.15.1 at work), and it does not crash anymore, but it causes another problem after I replaced that line and used assignInNamespace; also, for some reason I had to replace odbcValidChannel with RODBC:::odbcValidChannel and quoteTabNames with RODBC:::quoteTabNames.
But when I used sqlSave, I got the following error:
Error in odbcUpdate(channel, query, mydata, coldata[m, ], test = test, :
no parameters, so nothing to update
I don't even use odbcUpdate anywhere in my code, and RODBC:::sqlSave does not have an odbcUpdate call inside.
Any thoughts?
thank you,
-Alex
Here is the pseudocode for Lempel-Ziv-Welch compression.
pattern = get input character
while ( not end-of-file ) {
    K = get input character
    if ( <<pattern, K>> is NOT in the string table ) {
        output the code for pattern
        add <<pattern, K>> to the string table
        pattern = K
    }
    else { pattern = <<pattern, K>> }
}
output the code for pattern
output EOF_CODE
I am trying to code this in Lua, but it is not really working. Here is the code I modeled after an LZW function in Python, but I am getting an "attempt to call a string value" error on line 8.
function compress(uncompressed)
    local dict_size = 256
    local dictionary = {}
    w = ""
    result = {}
    for c in uncompressed do
        -- while c is in the function compress
        local wc = w + c
        if dictionary[wc] == true then
            w = wc
        else
            dictionary[w] = ""
            -- Add wc to the dictionary.
            dictionary[wc] = dict_size
            dict_size = dict_size + 1
            w = c
        end
        -- Output the code for w.
        if w then
            dictionary[w] = ""
        end
    end
    return dictionary
end

compressed = compress('TOBEORNOTTOBEORTOBEORNOT')
print(compressed)
I would really like some help either getting my code to run, or helping me code the LZW compression in Lua. Thank you so much!
Assuming uncompressed is a string, you'll need to use something like this to iterate over it:
for i = 1, #uncompressed do
    local c = string.sub(uncompressed, i, i)
    -- etc
end
There's another issue on line 10; .. is used for string concatenation in Lua, so this line should be local wc = w .. c.
You may also want to read this with regard to the performance of string concatenation. Long story short, it's often more efficient to keep each element in a table and return it with table.concat().
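For example, the usual buffer pattern looks like this (a generic sketch, not specific to LZW):

-- collect the pieces in a table and join them once at the end
local parts = {}
for i = 1, 10 do
    parts[#parts + 1] = tostring(i)
end
local s = table.concat(parts, ",")  -- one concatenation instead of ten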
You should also take a look here to download the source for a high-performance LZW compression algorithm in Lua...
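Putting the two fixes together, here is a minimal runnable version of compress. It is a sketch that follows the structure of the Python original: the dictionary is seeded with the 256 single-byte strings, and the function returns the list of output codes rather than the dictionary itself.

function compress(uncompressed)
    local dict_size = 256
    local dictionary = {}
    -- seed the dictionary with all single-byte strings
    for i = 0, 255 do
        dictionary[string.char(i)] = i
    end
    local w = ""
    local result = {}
    for i = 1, #uncompressed do
        local c = string.sub(uncompressed, i, i)
        local wc = w .. c
        if dictionary[wc] then
            w = wc
        else
            -- output the code for w and add wc to the string table
            result[#result + 1] = dictionary[w]
            dictionary[wc] = dict_size
            dict_size = dict_size + 1
            w = c
        end
    end
    -- output the code for the final pattern
    if w ~= "" then
        result[#result + 1] = dictionary[w]
    end
    return result
end

local compressed = compress('TOBEORNOTTOBEORTOBEORNOT')
print(table.concat(compressed, " "))
-- prints: 84 79 66 69 79 82 78 79 84 84 256 258 260 265 259 261 263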