I am having some issues accessing the timestamp data in the IBrokers package.
Here is an example of the data I get:
AAPL.Open AAPL.High AAPL.Low AAPL.Close AAPL.Volume AAPL.WAP AAPL.hasGaps AAPL.Count
2015-01-09 17:59:00 112 112.04 111.95 112 6043 112.011 0 2240
So when I run data[,0] I get
2015-01-09 17:59:00
The problem is that later on when I try to save that into a MySQL table I get the following error:
Error in dimnames(cd) <- list(as.character(index(x)), colnames(x)) :
'dimnames' applied to non-array
It looks like data[,0] does not simply contain the timestamp.
When I do a summary of the variable ts which contains data[,0] I get:
Error in `colnames<-`(`*tmp*`, value = c("ts.1", "ts.0")) :
'names' attribute [2] must be the same length as the vector [1]
Any tip on how to access the timestamp, or how to convert the contents of ts to character so I can insert it into the DB, would be appreciated.
EDIT:
dput() output
structure(c(112, 112.04, 111.95, 112, 6043, 112.011, 0, 2240), .Dim = c(1L,
8L), index = structure(1420837140, tzone = "", tclass = c("POSIXct",
"POSIXt")), .indexCLASS = c("POSIXct", "POSIXt"), tclass = c("POSIXct",
"POSIXt"), .indexTZ = "", tzone = "", .Dimnames = list(NULL,
c("AAPL.Open", "AAPL.High", "AAPL.Low", "AAPL.Close", "AAPL.Volume",
"AAPL.WAP", "AAPL.hasGaps", "AAPL.Count")), class = c("xts",
"zoo"), from = "20150112 02:52:24", to = "20150112 02:53:24", src = "IB", updated = structure(33434342.12435, class = c("POSIXct",
"POSIXt")))
As suggested by @JoshuaUlrich in the comments on my question, the answer was in the zoo package.
Full documentation for zoo can be found here.
In my particular case, loading the zoo library and simply calling:
time(data[,0])
solved the "Error in dimnames()" error.
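For the original goal of writing the data into MySQL, here is a minimal sketch of converting the index to character and building a plain data.frame first (the ts_char and df names are just illustrative, not from the original code):
library(zoo)   # index() and coredata() methods for xts/zoo objects

# 'data' is the xts object returned by IBrokers
ts_char <- format(index(data), "%Y-%m-%d %H:%M:%S")   # POSIXct index as character
df <- data.frame(timestamp = ts_char, coredata(data), stringsAsFactors = FALSE)
# df is now a plain data.frame that dbWriteTable() can insert without the dimnames error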
Hope it helps someone else.
I would like to import variables defined in a JSON file (my_info.json) as attributes for Bazel rules.
I tried this (https://docs.bazel.build/versions/5.3.1/skylark/tutorial-sharing-variables.html) and it works, but I do not want to use a .bzl file; I want to import the variables directly as attributes in BUILD.bazel.
I want to use those variables imported from my_info.json as attributes in other BUILD.bazel files.
projects/python_web/BUILD.bazel
load("//projects/tools/parser:config.bzl", "MY_REPO","MY_IMAGE")
container_push(
    name = "publish",
    format = "Docker",
    registry = "registry.hub.docker.com",
    repository = MY_REPO,
    image = MY_IMAGE,
    tag = "1",
)
Asking something similar in the Bazel Slack, I was told that it is not possible to import variables directly into Bazel; the JSON has to be parsed and the variables written into a .bzl file.
I also tried the code below, but nothing is written to the config.bzl file.
my_info.json
{
    "MYREPO" : "registry.hub.docker.com",
    "MYIMAGE" : "michael/monorepo-python-web"
}
WORKSPACE.bazel
load("//projects/tools/parser:jsonparser.bzl", "load_my_json")
load_my_json(
    name = "myjson"
)
projects/tools/parser/jsonparser.bzl
def _load_my_json_impl(repository_ctx):
    json_data = json.decode(repository_ctx.read(repository_ctx.path(Label(":my_info.json"))))
    config_lines = ["%s = %s" % (key, repr(val)) for key, val in json_data.items()]
    repository_ctx.file("config.bzl", "\n".join(config_lines))

load_my_json = repository_rule(
    implementation = _load_my_json_impl,
    attrs = {},
)
projects/tools/parser/BUILD.bazel
load("#aspect_bazel_lib//lib:yq.bzl", "yq")
load(":config.bzl", "MYREPO", "MY_IMAGE")
yq(
name = "convert",
srcs = ["my_info2.json"],
args = ["-P"],
outs = ["bar.yaml"],
)
Executing:
% bazel build projects/tools/parser:convert
ERROR: Traceback (most recent call last):
File "/Users/michael.taquia/Documents/Personal/Projects/bazel/bazel-projects/multi-language-bazel-monorepo/projects/tools/parser/BUILD.bazel", line 2, column 22, in <toplevel>
load(":config.bzl", "MYREPO", "MY_IMAGE")
Error: file ':config.bzl' does not contain symbol 'MYREPO'
While troubleshooting I can see that the execution loads jsonparser.bzl but never enters the _load_my_json_impl function (based on print statements), and nothing is written to config.bzl.
Notes: Tested on macOS 12.6 (21G115), Darwin Kernel Version 21.6.0.
Is there a better way to do this? A code snippet would be very useful.
I have a multiline string variable which contains several lines of system log like the example below, and I would like to extract only the JSON part.
System 123456
Logs start 2021-07-03 12:00:00
<event> {log_in_json}
<event> {log_in_json}
I am using find to search over the string variable, but this only allows me to get the first occurrence. Could anyone advise?
start = var.find("<event>")
end = var.find("}}")
extracted_line = var[start:end + len("}}")]
json_str = extracted_line.lstrip("<event>")
print(json_str)
Using the optional second argument to the find method, we can set the starting point for the search. So, on the second and subsequent iterations we start where we previously found the last match (end), and continue until the method returns -1:
var = '''
System 123456
Logs start 2021-07-03 12:00:00
<event> {log_in_json}
<event> {log_in_json2}
'''
start = var.find('<event>')
while start != -1:                                 # find() returns -1 when there are no more matches
    end = var.find("}", start)
    extracted_line = var[start:end + len("}")]
    json_str = extracted_line.lstrip('<event> ')   # drop the leading '<event> ' characters
    print(json_str)
    start = var.find('<event>', end)               # resume the search after the previous match

# {log_in_json}
# {log_in_json2}
I have been experimenting with Plumber in R recently, and am having success when I pass the following data using a POST request:
{"Gender": "F", "State": "AZ"}
This allows me to write a function like the following to return the data.
#* @post /score
score <- function(Gender, State){
  data <- list(
    Gender = as.factor(Gender)
    , State = as.factor(State))
  return(data)
}
However, when I try to POST an array of JSON objects, I can't seem to access the data through the function
[{"Gender":"F","State":"AZ"},{"Gender":"F","State":"NY"},{"Gender":"M","State":"DC"}]
I get the following error
{
  "error": [
    "500 - Internal server error"
  ],
  "message": [
    "Error in is.factor(x): argument \"Gender\" is missing, with no default\n"
  ]
}
Does anyone have an idea of how Plumber parses JSON? I'm not sure how to access and assign the fields to vectors to score the data.
Thanks in advance
I see two possible solutions here. The first is a command-line-based approach, which I assume you were attempting. I tested this on Windows and used column-based data.frame encoding, which I prefer due to shorter JSON string lengths. Make sure to escape quotation marks correctly to avoid 'argument "..." is missing, with no default' errors:
curl -H "Content-Type: application/json" --data "{\"Gender\":[\"F\",\"F\",\"M\"],\"State\":[\"AZ\",\"NY\",\"DC\"]}" http://localhost:8000/score
# [["F","F","M"],["AZ","NY","DC"]]
The second approach is R native and has the advantage of having everything in one place:
library(jsonlite)
library(httr)

## sample data
lst = list(
  Gender = c("F", "F", "M")
  , State = c("AZ", "NY", "DC")
)

## jsonify
jsn = lapply(
  lst
  , toJSON
)

## query
request = POST(
  url = "http://localhost:8000/score?"
  , query = jsn # values must be length 1
)

response = content(
  request
  , as = "text"
  , encoding = "UTF-8"
)

fromJSON(
  response
)
#      [,1]
# [1,] "[\"F\",\"F\",\"M\"]"
# [2,] "[\"AZ\",\"NY\",\"DC\"]"
Be aware that httr::POST() expects a list of length-1 values as query input, so the array data should be jsonified beforehand. If you want to avoid the additional package imports altogether, some system(), sprintf(), etc. magic should do the trick.
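For completeness, here is a minimal sketch of that base-R route (it just shells out to the same request as above, so this assumes curl is on the PATH and a Unix-like shell is used):
body <- '{"Gender":["F","F","M"],"State":["AZ","NY","DC"]}'
cmd <- sprintf(
  "curl -s -H 'Content-Type: application/json' --data '%s' http://localhost:8000/score",
  body
)
system(cmd, intern = TRUE)   # returns the API response as a character vector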
Finally, here is my plumber endpoint (living in R/plumber.R and condensed a little bit):
#* @post /score
score = function(Gender, State){
  lapply(
    list(Gender, State)
    , as.factor
  )
}
and code to fire up the API:
pr = plumber::plumb("R/plumber.R")
pr$run(port = 8000)
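If you want the endpoint to accept the original array-of-objects payload directly, one option is to parse the request body yourself. This is only a sketch, assuming jsonlite is installed and relying on plumber exposing the raw body of the request as req$postBody:
#* @post /score
score <- function(req){
  df <- jsonlite::fromJSON(req$postBody)   # the array of objects becomes a data.frame
  df$Gender <- as.factor(df$Gender)
  df$State  <- as.factor(df$State)
  df
}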
I am working on a check_mk plugin and can't seem to get the WATO-specified params passed to the check function when it runs, for one check in particular...
The check param rule shows up in WATO.
It writes correct-looking values to rules.mk.
Clicking the "Analyze check parameters" icon from a host's service discovery shows the rule as active.
The check parameters displayed in service discovery show the title from the WATO file, so it seems like it is associating things correctly.
Running cmk -D <hostname> shows the check as always having the default values, though.
I have been staring at it for a while and am out of ideas.
Check_MK version: 1.2.8p21 Raw
Bulk of check file:
factory_settings["elasticsearch_status_default"] = {
"min": (600, 300)
}
def inventory_elasticsearch_status(info):
for line in info:
yield restore_whitespace(line[0]), {}
def check_elasticsearch_status(item, params, info):
for line in info:
name = restore_whitespace(line[0])
message = restore_whitespace(line[2])
if name == item:
return get_status_state(params["min"], name, line[1], message, line[3])
check_info['elasticsearch_status'] = {
"inventory_function" : inventory_elasticsearch_status,
"check_function" : check_elasticsearch_status,
"service_description" : "ElasticSearch Status %s",
"default_levels_variable" : "elasticsearch_status_default",
"group" : "elasticsearch_status",
"has_perfdata" : False
}
WATO file:
group = "checkparams"
#subgroup_applications = _("Applications, Processes & Services")
register_check_parameters(
    subgroup_applications,
    "elasticsearch_status",
    _("Elastic Search Status"),
    Dictionary(
        elements = [
            ( "min",
              Tuple(
                  title = _("Minimum required status age"),
                  elements = [
                      Age(title = _("Warning if below"), default_value = 600),
                      Age(title = _("Critical if below"), default_value = 300),
                  ]
              ))
        ]
    ),
    None,
    match_type = "dict",
)
Entry in rules.mk from WATO rule:
checkgroup_parameters.setdefault('elasticsearch_status', [])
checkgroup_parameters['elasticsearch_status'] = [
    ( {'min': (3600, 1800)}, [], ALL_HOSTS ),
] + checkgroup_parameters['elasticsearch_status']
Let me know if any other information would be helpful!
EDIT: pls help
I posted the question here as well, and the mystery got solved.
I was matching the WATO rule to item None (the 5th positional arg in the WATO file), but since this check had multiple items inventoried under it (none of which had the id None), the rule was applying to the host but not to any of the specific service checks.
The fix was to replace that param with:
TextAscii( title = _("Status Description"), allow_empty = True),
I am running an R script that adds data to a MySQL database. I usually format the data and add it as it comes in, every couple of hours (the data stream is not continuous). My first set of data was added properly to the MySQL database. The second batch of data cannot be added properly.
con = dbConnect(MySQL(), user='root', password='xxxxxx', dbname='test', host='localhost')
dbWriteTable(con, 'Tables', value = parseTweets(filterStream(file.name= "", track=c("lebron"), timeout=10, oauth=my_oauth)))
When I rerun the last line (dbWriteTable) again, it gives me the following error:
Error: Error in .local(conn, statement, ...) :
could not run statement: Table 'tables' already exists
I also used
dbWriteTable(con, 'Tables', value = parseTweets(filterStream(file.name= "", track=c("lebron"), timeout=10, oauth=my_oauth)), append = TRUE)
but it provides the same error
For some reason setting append to TRUE is not working. Instead, give it a number. See the code below for a better understanding:
dbWriteTable(con, 'Tables', value = parseTweets(filterStream(file.name= "", track=c("lebron"), timeout=100, oauth=my_oauth)), overwrite = 0, row.names = 0, append = 1)
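Here is a minimal sketch of the full flow under the same assumptions as the question (same connection details and my_oauth; the fetch is separated from the write only for readability):
# fetch and parse the tweets first
tweets <- parseTweets(filterStream(file.name = "", track = c("lebron"), timeout = 100, oauth = my_oauth))

# then append them to the existing table instead of recreating it
con <- dbConnect(MySQL(), user = 'root', password = 'xxxxxx', dbname = 'test', host = 'localhost')
dbWriteTable(con, 'Tables', value = tweets, overwrite = 0, row.names = 0, append = 1)
dbDisconnect(con)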