In the Octave script below I am looping through the files in a directory, loading each one into Octave to do some manipulation on the data, and then attempting to write the manipulated data (a matrix) to a new file whose name is derived from the name of the input file. The manipulated data is assigned to a variable with the same name as the file it is to be saved in. All unwanted variables are cleared, and the save command should then save/write the single remaining matrix variable to the file named by new_filename.
However, this last save/write is not happening, and I don't understand why. Without specific variable arguments, the save function should save all variables in scope, and in this case there is only the one matrix to save. Why is this not working?
clear all ;
all_raw_OHLC_files = glob( "*_raw_OHLC_daily" ) ; % cell with filenames matching *_raw_OHLC_daily
for ii = 1 : length( all_raw_OHLC_files ) % loop over the cell of filenames
  filename = all_raw_OHLC_files{ii} ; % get the current file's name
  % create a new filename for the output file
  split_filename = strsplit( filename , "_" ) ;
  new_filename = tolower( [ split_filename{1} "_" split_filename{2} "_ohlc_daily" ] ) ;
  % open and read file
  fid = fopen( filename , 'rt' ) ;
  data = textscan( fid , '%s %f %f %f %f %f %s' , 'Delimiter' , ',' , 'CollectOutput' , 1 ) ;
  fclose( fid ) ;
  ex_data = [ datenum( data{1} , 'yyyy-mm-dd HH:MM:SS' ) data{2} ] ; % extract the file's data
  % process the raw data into OHLC bars
  weekday_ix = weekday( ex_data( : , 1 ) ) ;
  % find Mondays immediately preceded by Sundays in the data
  monday_ix = find( ( weekday_ix == 2 ) .* ( shift( weekday_ix , 1 ) == 1 ) ) ;
  sunday_ix = monday_ix .- 1 ;
  % replace the Monday open with the Sunday open
  ex_data( monday_ix , 2 ) = ex_data( sunday_ix , 2 ) ;
  % replace the Monday high with the max of the Sunday and Monday highs
  ex_data( monday_ix , 3 ) = max( ex_data( sunday_ix , 3 ) , ex_data( monday_ix , 3 ) ) ;
  % repeat for the min of the lows
  ex_data( monday_ix , 4 ) = min( ex_data( sunday_ix , 4 ) , ex_data( monday_ix , 4 ) ) ;
  % combine the volume figures
  ex_data( monday_ix , 6 ) = ex_data( sunday_ix , 6 ) .+ ex_data( monday_ix , 6 ) ;
  % now delete the Sunday data
  ex_data( sunday_ix , : ) = [] ;
  assignin( "base" , tolower( [ split_filename{1} "_" split_filename{2} "_ohlc_daily" ] ) , ex_data )
  clear ans weekday_ix sunday_ix monday_ix ii filename split_filename fid ex_data data all_raw_OHLC_files
  % print to file
  save new_filename
endfor
save new_filename saves the current workspace to a file literally named "new_filename". I guess what you want is to create a file whose name is stored in the variable new_filename:
save (new_filename);
Your current approach of "clear everything I don't need and then store the whole workspace" is IMHO very ugly; you should instead explicitly store ex_data if this is the only part you want:
save (new_filename, "ex_data");
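For completeness, a minimal sketch of the end of the loop with neither the assignin nor the bulk clear, using the same variable names as the question's script (note that ex_data must still exist when save runs):
% after ex_data has been built for the current file
new_filename = tolower( [ split_filename{1} "_" split_filename{2} "_ohlc_daily" ] ) ;
save( new_filename , "ex_data" ) ; % writes only ex_data, to the file named in new_filename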
I am uploading an Excel data sheet. In the sheet I have a numeric column which I want to convert to a date, so 40955 should look like 04.09.1955 (DDMMYYYY).
Can someone help me out here? I tried using the Data Conversion transformation component and it is showing me an error.
The main obstacle here is that your values are not in an easy-to-use format.
To do what you specify, you need to break the value up into its parts, concatenate them again, and then convert. All of this can be done in a single statement; for explanation, I show the individual steps below.
DECLARE
  @someval int = 40955,
  @dateval int,
  @dated date
;
SELECT
  -- single extraction steps
  @someval % 100 AS yearval,
  ( @someval / 100 ) % 100 AS monthval,
  ( @someval / 10000 ) AS dayval
;
SELECT
  --@dateval =
  -- extract year and push it to the front
  ( @someval % 100 ) * 10000
  -- extract month and push it into the middle
  + ( @someval / 100 ) % 100 * 100
  -- extract day and keep it at the end
  + ( @someval / 10000 )
;
SELECT
  -- clip all elements into a single integer
  @dateval =
  ( @someval % 100 ) * 10000
  + ( @someval / 100 ) % 100 * 100
  + ( @someval / 10000 )
;
SELECT
  -- 112 = yyyymmdd format
  @dated = CONVERT( date, CAST( @dateval AS varchar(8) ), 112 )
;
SELECT
  -- show as standard (format 120) date aka ISO 8601 readable
  @dated AS Dated
;
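Put together, the whole conversion really is the single statement promised above (a sketch over the same hypothetical @someval):
DECLARE @someval int = 40955 ;
SELECT
  CONVERT( date ,
           CAST( ( @someval % 100 ) * 10000
               + ( @someval / 100 ) % 100 * 100
               + ( @someval / 10000 ) AS varchar(8) ) ,
           112 ) AS Dated ;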
However, I suspect that the value you receive from Excel is a kind of Julian (serial) date. In that case the following answer will provide a solution:
convert Excel Date Serial Number to Regular Date
Keep in mind that in SSIS you need to wrap this logic in either a column or a transformation.
I have some code in Lua. In the first function, I get JSON data, put it in a variable (item1), and am able to print it. In the second function, I would like to use this variable to show the image (item1 is an image URL). I tried a forward declaration like the one below and used the variable in the second function, but it does not work. How can this be solved?
local item1

local function networkListener( event )
  local res = json.prettify( event.response )
  local decoded = json.decode( res )
  if ( event.isError ) then
    print( "--Network error-- ", ( res ) )
  else
    print( "Results: " .. ( res ) )
    item1 = decoded.results.bindings[0].image.value
    print( item1 )
    local myText = display.newText( sceneGroup, item1, 10, 100, native.systemFont, 26 )
    myText:setFillColor( 1, 1, 1 )
  end
end

params.body = body
network.request( "http://example.com/data.json", "GET", networkListener, params )

local function networkListener2( event )
  if ( event.isError ) then
    print( "Network error - download failed" )
  else
    event.target.alpha = 0
    transition.to( event.target, { alpha = 1.0 } )
  end
  print( "event.response.fullPath: ", event.response.fullPath )
  print( "event.response.filename: ", event.response.filename )
  print( "event.response.baseDirectory: ", event.response.baseDirectory )
end

display.loadRemoteImage( item1, "GET", networkListener2, "item1.png", system.TemporaryDirectory, 50, 50 )
Thank you very much in advance for your help!
In your code, display.loadRemoteImage() is called before the preceding network.request() has finished its job. The networkListener callback hasn't been triggered yet, so the item1 variable is not assigned.
You should call loadRemoteImage() from within networkListener, or anywhere else where you know the URL has been successfully read, i.e. where the previous request has finished:
local function networkListener2( event )
  if ( event.isError ) then
    print( "Network error - download failed" )
  else
    event.target.alpha = 0
    transition.to( event.target, { alpha = 1.0 } )
  end
  print( "event.response.fullPath: ", event.response.fullPath )
  print( "event.response.filename: ", event.response.filename )
  print( "event.response.baseDirectory: ", event.response.baseDirectory )
end

local function networkListener( event )
  local res = json.prettify( event.response )
  local decoded = json.decode( res )
  if ( event.isError ) then
    print( "--Network error-- ", ( res ) )
  else
    print( "Results: " .. ( res ) )
    item1 = decoded.results.bindings[0].image.value
    print( item1 )
    -- position is set by the center of the text object
    local myText = display.newText( sceneGroup, item1, 10, 100, native.systemFont, 26 )
    myText:setFillColor( 1, 1, 1 )
    -- only request the image once item1 actually holds the URL
    display.loadRemoteImage( item1, "GET", networkListener2, "item1.png", system.TemporaryDirectory, 50, 50 )
  end
end

params.body = body
network.request( "http://example.com/data.json", "GET", networkListener, params )
I need to read data from an ASCII file where missing values are given as NA. Using textscan(...) does not seem to work, because textscan(...) seems to stop reading/parsing at the first occurrence of NA.
Here's a simple demonstration of the issue:
x = textscan ( "1 ; 2 ; 3\n4 ; NA ; 6" , '%d %d %d' , 'Delimiter' , ';' , 'ReturnOnError' , false )
error: textscan: Read error in field 2 of row 2
I have also tried to tell textscan(...) to interpret NA as "empty value", but no luck:
x = textscan ( "1 ; 2 ; 3\n4 ; NA ; 6" , '%d %d %d' , 'Delimiter' , ';' , 'TreatAsEmpty' , 'NA' , 'ReturnOnError' , false )
error: textscan: Read error in field 2 of row 2
Can someone explain what's going on, or how to make this work?
Note that this is just a simplified example to illustrate the problem. The format of the data in my files is a bit more complex, and I really depend on textscan(...) to parse it; I don't think I can easily do it without textscan(...).
(I am running Octave 4.2.1.)
NA is only defined for floating-point numbers, so you should use the '%f' conversion specifier instead of '%d':
x = textscan ( "1 ; 2 ; 3\n4 ; NA ; 6" , '%f %f %f' , 'Delimiter' , ';' , 'ReturnOnError' , false )
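If downstream code expects NaN rather than Octave's NA marker, the parsed columns can be converted afterwards; a minimal sketch, assuming the x from above (isna() is Octave-specific):
A = cell2mat( x ) ;    % splice the three column vectors into one numeric matrix
A( isna( A ) ) = NaN ; % replace Octave's NA marker with ordinary NaN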
We have key-value pairs in Redis, each consisting of a key with a JSON object as its value holding various information:
"node:service:i-01fe0d69c343734" :
"{\"port\":\"32781\",
\"version\":\"3.0.2\",
\"host-instance-id\":\"i-01fe0d69c2243b366\",
\"last-checkin\":\"1492702508\",
\"addr\":\"10.0.0.0\",
\"host-instance-type\":\"m3.large\"}"
Is it possible to sort the table based on the last-checkin time of the value?
Here is my solution to your problem, using the quicksort algorithm, after first making a small correction to your input (as I understood it):
-----------------------------------------------------
local json = require("json")
function quicksort(t, sortname, start, endi)
  start, endi = start or 1, endi or #t
  sortname = sortname or 1
  if (endi - start < 1) then return t end
  local pivot = start
  for i = start + 1, endi do
    if t[i][sortname] <= t[pivot][sortname] then
      local temp = t[pivot + 1]
      t[pivot + 1] = t[pivot]
      if (i == pivot + 1) then
        t[pivot] = temp
      else
        t[pivot] = t[i]
        t[i] = temp
      end
      pivot = pivot + 1
    end
  end
  t = quicksort(t, sortname, start, pivot - 1)
  return quicksort(t, sortname, pivot + 1, endi)
end
---------------------------------------------------------
-- I manually added the delimiter ","
-- and each "node:service..." key must be different
str = [[
{
"node:service:i-01fe0d69c343731" :
"{\"port\":\"32781\",
\"version\":\"3.0.2\",
\"host-instance-id\":\"i-01fe0d69c2243b366\",
\"last-checkin\":\"1492702506\",
\"addr\":\"10.0.0.0\",
\"host-instance-type\":\"m3.large\"}"
,
"node:service:i-01fe0d69c343732" :
"{\"port\":\"32781\",
\"version\":\"3.0.2\",
\"host-instance-id\":\"i-01fe0d69c2243b366\",
\"last-checkin\":\"1492702508\",
\"addr\":\"10.0.0.0\",
\"host-instance-type\":\"m3.large\"}"
,
"node:service:i-01fe0d69c343733" :
"{\"port\":\"32781\",
\"version\":\"3.0.2\",
\"host-instance-id\":\"i-01fe0d69c2243b366\",
\"last-checkin\":\"1492702507\",
\"addr\":\"10.0.0.0\",
\"host-instance-type\":\"m3.large\"}"
,
"node:service:i-01fe0d69c343734" :
"{\"port\":\"32781\",
\"version\":\"3.0.2\",
\"host-instance-id\":\"i-01fe0d69c2243b366\",
\"last-checkin\":\"1492702501\",
\"addr\":\"10.0.0.0\",
\"host-instance-type\":\"m3.large\"}"
}
]]
-- remove the unnecessary \ escaping
str = str:gsub('"{','{'):gsub('}"','}'):gsub('\\"','"')
local t_res= json.decode(str)
-- prepare table before sorting
local t_indexed = {}
for k,v in pairs(t_res) do
  v["node-service"] = k
  t_indexed[#t_indexed+1] = v
end
-- the quicksort above works only on an integer-indexed (array-like) table
local t_sort = quicksort( t_indexed, "last-checkin" )
for k,v in pairs(t_sort) do
  print( k , v["node-service"] , v["port"], v["version"], v["host-instance-id"], v["last-checkin"] , v["addr"], v["host-instance-type"] )
end
console:
1 node:service:i-01fe0d69c343734 32781 3.0.2 i-01fe0d69c2243b366 1492702501 10.0.0.0 m3.large
2 node:service:i-01fe0d69c343731 32781 3.0.2 i-01fe0d69c2243b366 1492702506 10.0.0.0 m3.large
3 node:service:i-01fe0d69c343733 32781 3.0.2 i-01fe0d69c2243b366 1492702507 10.0.0.0 m3.large
4 node:service:i-01fe0d69c343732 32781 3.0.2 i-01fe0d69c2243b366 1492702508 10.0.0.0 m3.large
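For what it's worth, Lua's built-in table.sort can replace the hand-rolled quicksort entirely; a minimal sketch over the same t_indexed as above:
table.sort( t_indexed, function( a, b )
  -- the timestamps are numeric strings, so compare them as numbers
  return tonumber( a["last-checkin"] ) < tonumber( b["last-checkin"] )
end )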
I have survey data that I am working on. I need to make some tables and regression analyses on the data.
After attaching the data, this is the code I use for tables for four variables:
ftable(var1, var2, var3, var4)
And this is the regression code that I use for the data:
logit.1 <- glm(var4 ~ var3 + var2 + var1, family = binomial(link = "logit"))
summary(logit.1)
So far so good for the unweighted analyses. But how can I do the same analyses for the weighted data? Here is some additional info:
There are four variables in the dataset that reflect the sampling structure. These are
strat: stratum (urban or (sub-county) rural).
clust: batch of interviews that were part of the same random walk
vill_neigh_code: village or neighbourhood code
sweight: weights
library(survey)
data(api)
# example data set
head( apiclus2 )
# instead of var1 - var4, use these four variables:
ftable( apiclus2[ , c( 'sch.wide' , 'comp.imp' , 'both' , 'awards' ) ] )
# move it over to x for faster typing
x <- apiclus2
# also give x a column of all ones
x$one <- 1
# run the glm() function specified.
logit.1 <-
glm(
comp.imp ~ target + cnum + growth ,
data = x ,
family = binomial( link = 'logit' )
)
summary( logit.1 )
# now create the survey object you've described
dclus <-
svydesign(
id = ~dnum + snum , # cluster variable(s)
strata = ~stype , # stratum variable
weights = ~pw , # weight variable
data = x ,
nest = TRUE
)
# weighted counts
svyby(
~one ,
~ sch.wide + comp.imp + both + awards ,
dclus ,
svytotal
)
# weighted counts formatted differently
ftable(
svyby(
~one ,
~ sch.wide + comp.imp + both + awards ,
dclus ,
svytotal ,
keep.var = FALSE
)
)
# run the svyglm() function specified.
logit.2 <-
svyglm(
comp.imp ~ target + cnum + growth ,
design = dclus ,
family = binomial( link = 'logit' )
)
summary( logit.2 )
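Translated back to the question's own variables, the weighted design and regression would look something like the sketch below; strat, clust, and sweight are the column names given in the question, while my_survey_data is a hypothetical name for the data frame:
# build the survey design from the question's sampling-structure columns
dsurvey <-
	svydesign(
		id = ~clust ,           # clusters: batches from the same random walk
		strata = ~strat ,       # strata: urban vs (sub-county) rural
		weights = ~sweight ,    # the sampling weights
		data = my_survey_data , # hypothetical data frame name
		nest = TRUE
	)

# weighted version of the original logit
logit.w <-
	svyglm(
		var4 ~ var3 + var2 + var1 ,
		design = dsurvey ,
		family = binomial( link = 'logit' )
	)
summary( logit.w )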