Readtimearray function in Julia TimeSeries package - csv

I would like to read a csv file of the following form with readtimearray:
"","ES1 Index","VG1 Index","TY1 Comdty","RX1 Comdty","GC1 Comdty"
"1999-01-04",1391.12,3034.53,66.515625,86.2,441.39
"1999-01-05",1404.86,3072.41,66.3125,86.17,440.63
"1999-01-06",1435.12,3156.59,66.4375,86.32,441.7
"1999-01-07",1432.32,3106.08,66.25,86.22,447.67
"1999-01-08",1443.81,3093.46,65.859375,86.36,447.06
"1999-01-11",1427.84,3005.07,65.71875,85.74,449.5
"1999-01-12",1402.33,2968.04,65.953125,86.31,442.92
"1999-01-13",1388.88,2871.23,66.21875,86.52,439.4
"1999-01-14",1366.46,2836.72,66.546875,86.73,440.01
However, here's what I get when I evaluate readtimearray("myfile.csv"):
ERROR: `convert` has no method matching convert(::Type{UTF8String}, ::Float64)
in push! at array.jl:460
in readtimearray at /home/juser/.julia/v0.3/TimeSeries/src/readwrite.jl:25
What is it that I am not seeing?

That looks like a bug in readtimearray: empty lines are removed, but to identify them the code only looks at the first column. Since the header has an empty string in its first column, it is removed as well.
Changing the header of your file to
"date","ES1 Index","VG1 Index","TY1 Comdty","RX1 Comdty","GC1 Comdty"
addresses the problem.

You're using convert, which is meant for use with Julia types (see the docs for more info).
You can parse the string using Date:
d=Date("1999-04-01","yyyy-mm-dd")
#...
array_of_dates = map(x->Date(x,"yyyy-mm-dd"),array_of_strings)

Related

Using property OR in "conditions" parameter of askargs action with Semantic MediaWiki API

I'm trying to fetch results via the API using the askargs module. I have no problem getting results when I have just one condition, or several conditions combined with the AND operator, where I use the pipe character to separate them (as described in the documentation).
E.g.
[[Category:+]] AND [[Jurisdiction::A]] AND [[Type::B]]
Category:+ | Jurisdiction::A | Type::B
But the pipe character doesn't work with OR.
I need to be able to use both logical operators with several arguments within the same query.
Am I missing something?
No. The API doesn't handle OR conditions, due to simplistic code in the query-parameter formatter.
See file SemanticMediaWiki/src/MediaWiki/Api/ApiRequestParameterFormatter.php
at line 132:
protected function formatConditions( $condition ) {
    return "[[$condition]]";
}
Every condition in the query is formatted with surrounding brackets, leading OR to be interpreted as a page title.
An alternative is to use Special:Ask with URL encoded query and json format:
https://www.semantic-mediawiki.org/wiki/Special:Ask/-5B-5BHas-20keyword::askargs-5D-5DOR-5B-5BHas-20keyword::ask-5D-5D/-3F%3Dhelp-20page/-3FHas-20description%3Ddescription/format%3Djson
Since I came here from a web search, I'm going to add another neat possibility: if you use the alternative separator, you can use a double pipe as a logical OR.
Example:
%1FCategory:+%1FJurisdiction::A%1FType::B||C
This should be read as follows:
Category:+ AND Jurisdiction::A AND (Type::B OR Type::C)
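For illustration, here is a minimal Python sketch of sending such a query through the askargs module with the 0x1F alternative separator; the endpoint URL and the printout property are assumptions for the example, not values from this thread:

import requests

# Hypothetical wiki endpoint; replace with your own api.php URL.
API_URL = "https://example.org/w/api.php"

# A leading 0x1F marks the alternative-separator syntax, so the double
# pipe inside "Type::B||C" acts as a logical OR.
conditions = "\x1F" + "\x1F".join(["Category:+", "Jurisdiction::A", "Type::B||C"])

params = {
    "action": "askargs",
    "conditions": conditions,
    "printouts": "Has description",  # placeholder printout property
    "format": "json",
}

response = requests.get(API_URL, params=params)
print(response.json())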

Accessing JSON data after 'loading' it

With a lot of help from people on this site, I managed to get some JSON data from an Amazon page. The data, for example, looks like this:
https://jsoneditoronline.org/?id=9ea92643044f4ac88bcc3e76d98425fc
First I have a list of strings, which is converted to a single string.
script = response.xpath('//script/text()').extract()
#For example, I need the variationValues data
variationValues = re.findall(r'variationValues\" : ({.*?})', ' '.join(script))[0]
Then, in my code, I have this (not a great name, will be changed later)
variationValuesJson = json.loads(variationValues)
variationValuesJson is in fact a dictionary, so doing something like this
variationValues["size_name"][3]
should return "5.5 M US".
My issue is that, when running the program, I get the "string indices must be integers" error. Does anyone know what's wrong?
Note: I have tried using 'size_name' instead of "size_name", same error
variationValues["size_name"][3] #this is the raw string which you have converted to variationValuesjson
I think this is not what you actually want.
Your code should be this.
variationValuesJson['size_name'][3] #use variationValuesjson ;)
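As a self-contained Python sketch of the difference, using made-up data in place of the scraped variationValues string:

import json

# Made-up stand-in for the scraped variationValues string.
variationValues = '{"size_name": ["4 M US", "4.5 M US", "5 M US", "5.5 M US"]}'

variationValuesJson = json.loads(variationValues)  # now a dict
print(variationValuesJson["size_name"][3])         # -> 5.5 M US

# Indexing the raw string instead reproduces the error from the question:
try:
    variationValues["size_name"][3]
except TypeError as err:
    print(err)  # string indices must be integers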

Klaxon's JSON pretty printing outputs "["result"]"

val time = json.lookup<String?>("query.results.channel.title").toJsonString(true)
outputs
["Yahoo! Weather - Nome,AK,US"]
Is there a way to get the output without the brackets and the quotation marks?
I guess that
.replace("[\"","").replace("\"]","")
isn't the best way
The brackets come from the default implementation (see https://github.com/cbeust/klaxon/blob/master/src/main/kotlin/com/beust/klaxon/DSL.kt, the function appendJsonStringImpl at the very bottom),
so it is not possible to remove them by configuration.
It might work if you write an extension function for this particular class, but I guess this is not what you want.
So this is currently not possible without writing your own extension function.

Using a variable in an argument - JSON target

I would like to use a variable (string) as part of my JSON target. Instead of simply coding for each section, like this:
$.each(data.portfolioitems.section1, function (k,v){...}
$.each(data.portfolioitems.section2, function (k,v){...}
$.each(data.portfolioitems.section3, function (k,v){...}
I would like to have a variable "varsection" that indicates which section should be called, like this:
$.each(data.portfolioitems.varsection, function (k,v){...}
As written, it seems that I am targeting a section literally named "varsection", which of course doesn't exist.
I have found other topics discussing how to use a variable as part of a JSON target, but none of the solutions I found work in this scenario, where the target is an argument.
Use bracket notation, which lets a variable supply the property name:
data.portfolioitems[varsection]

Mathematica - Import CSV and process columns?

I have a CSV file that is formatted like:
0.0023709,8.5752e-007,4.847e-008
and I would like to import it into Mathematica and then have each column separated into a list so I can do some math on the selected column.
I know I can import the data with:
Import["data.csv"]
then I can separate the columns with this:
StringSplit[data[[1, 1]], ","]
which gives:
{"0.0023709", "8.5752e-007", "4.847e-008"}
The problem now is that I don't know how to get the data into individual lists, and Mathematica also does not accept scientific notation in the form 8.5e-007.
Any help in how to break the data into columns and format the scientific notation would be great.
Thanks in advance.
KennyTM is correct.
data = Import["data.csv", "CSV"];
column1 = data[[All,1]]
column2 = data[[All,2]]
...
Davorak's answer is the correct one if you need to import a whole CSV file as an array. However, if you have a single string that you need to convert from the C/Fortran-style exponential notation, you can use ImportString with different arguments for the format. For example:
In[1]:= ImportString["1.0e6", "List"]
Out[1]= {1.*^6}
The *^ operator is Mathematica's equivalent of the e notation. Note that this is also a good way to split apart strings that are in CSV form:
In[2]:= ImportString["1.0e6,3.2,foo", "CSV"]
Out[2]= {{1.*10^6,3.2,foo}}
In both cases, you'll get your answer wrapped up in an extra level of list structure, which is pretty easy to deal with. However, if you're really sure you only have or want a single number, you can turn the string into a stream and use Read. It's cumbersome enough that I'd stick to ImportString, however:
In[3]:= Module[{stream = StringToStream["1.0e6"], number},
number = Read[stream, "Number"];
Close[stream];
number]
Out[3]= 1.*10^6
You can fix the notation by using StringReplace[].
In[1]:= aa = {"0.0023709", "8.5752e-007", "4.847e-008"};
In[2]:= ToExpression[
          StringReplace[#,
            RegularExpression@"(^\d+\.\d+)e([+-]\d+)" -> "$1*10^$2"]
        ] & @ aa
Out[2]= {0.0023709, 8.5752*10^-7, 4.847*10^-8}
You can put the entire data array in place of aa to process it all at once with a one-liner:
{col1, col2, col3} = ToExpression[...] & @ Transpose[Import["data.csv", "CSV"]];
with ToExpression[...] as above.
In Mathematica 7, I use the "elements" argument. In fact, I can't Import even a .csv file without specifying the element:
aa = Import["data.csv", "Data"]
When you do this, all strings are automatically converted to expressions: Head /@ Flatten@aa is {Real, Real, ...}. Also, "8.5752e-007" becomes 8.5752*10^-7, a legal Mathematica expression.
The result of the Import is a 1×n list {{ ... }}.
So Transpose@aa gives the n×1 list {{.}, {.}, ...}.
I think this is the format you wanted.