Keep R code running despite errors or warnings?

I'm using this function from a previous Stack Overflow question (R: How to GeoCode a simple address using Data Science Toolbox):
require("RDSTK")
library(httr)
library(rjson)
geo.dsk <- function(addr){ # single address geocode with data sciences toolkit
require(httr)
require(rjson)
url <- "http://www.datasciencetoolkit.org/maps/api/geocode/json"
response <- GET(url,query=list(sensor="FALSE",address=addr))
json <- fromJSON(content(response,type="text"))
loc <- json['results'][[1]][[1]]$geometry$location
return(c(address=addr,long=loc$lng, lat= loc$lat))
}
Here is some example code. This works fine:
City<-c("Atlanta, USA", "Baltimore, USA", "Beijing, China")
r<- do.call(rbind,lapply(as.character(City),geo.dsk))
This does not work. It says: "Error in json["results"][[1]][[1]] : subscript out of bounds"
Citzy<-c("Leicester, United Kingdom")
do.call(rbind,lapply(as.character(Citzy),geo.dsk))
I believe the error is because it cannot find the city. So I would like the code to just ignore it and keep running. How would I go about doing this? Any help would be greatly appreciated!

Handling errors is best done with a try/catch block. In R, that would look something like this (source):
result = tryCatch({
    # write your intended code here
    Citzy <- c("Leicester, United Kingdom")
    do.call(rbind, lapply(as.character(Citzy), geo.dsk))
}, warning = function(w) {
    # log the warning or take other action here
}, error = function(e) {
    # log the error or take other action here
}, finally = {
    # this will execute no matter what else happened
})
So if you encounter the error, it will enter the error block (and skip the rest of the code in the "try" section) rather than stopping your program. Note that you should always "do something" with the error rather than ignoring it completely; logging a message to the console and/or setting an error flag are good things to do.
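Applied to your geocoding loop, a minimal sketch could look like the following (geo.dsk.safe is just an illustrative wrapper name, not part of RDSTK): the error handler returns NA coordinates for any address the service cannot resolve, so lapply keeps going over the rest of the cities.
geo.dsk.safe <- function(addr) {
    tryCatch({
        geo.dsk(addr)
    }, error = function(e) {
        # log which address failed, then return a placeholder row so rbind still works
        message("Could not geocode '", addr, "': ", conditionMessage(e))
        c(address = addr, long = NA, lat = NA)
    })
}
City <- c("Atlanta, USA", "Leicester, United Kingdom", "Beijing, China")
r <- do.call(rbind, lapply(as.character(City), geo.dsk.safe))
Rows with NA longitude/latitude can then be filtered out or inspected afterwards.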

Related

How to pass an alert query ending with double quotes to an ARM template parameter file

I am using Azure Pipelines to automate log-query-based alerts. I pass runtime parameter values into an Azure variable first, and then use the replace tokens task in the pipeline to substitute the query into the parameters.json file. When the query I pass does not end with a double quote ("), the resource group deployment task succeeds. But when the query ends with a double quote, the deployment fails.
Eg:
This is my base query.
"ApiManagementGatewayLogs
| where ApiId == ""my-api""
| where ResponseCode == 429
| where _SubscriptionId==""xxxxxxxxxxxxxxxxxxxxxxx"""
Since my runtime parameter is of type "string", I pass the query as a single line, like this:
ApiManagementGatewayLogs| where ApiId == ""my-api""| where ResponseCode == 429| where _SubscriptionId==""xxxxxxxxxxxxxxxxxxxxxxx""
But the deployment fails with the error below:
Template deployment validation was completed successfully.
Starting Deployment.
Deployment name is digitalAlerts
There were errors in your deployment. Error code: DeploymentFailed.
##[error]At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.
##[error]Details:
##[error]BadRequest: {
"error": {
"message": "The request had some invalid properties",
"code": "BadArgumentError",
"correlationId": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
"innererror": {
"code": "SyntaxError",
"message": "Request is invalid and cannot be processed: Syntax error: SYN0001: I could not parse that, sorry. [line:position=0:0]. Query: 'let ['ApiManagementGatewayLogs'] = view () { datatable(['TenantId']:string,['TimeGenerated']:datetime,['OperationName']:string,['CorrelationId']:string,['Region']:string,['IsRequestSuccess']:bool,['Category']:string,['TotalTime']:long,['CallerIpAddress']:string,['Method']:string,['Url']:string,['ClientProtocol']:string,['ResponseCode']:int,['BackendMethod']:string,['BackendUrl']:string,['BackendResponseCode']:int,['BackendProtocol']:string,['RequestSize']:int,['ResponseSize']:int,['Cache']:string,['CacheTime']:long,['BackendTime']:long,['ClientTime']:long,['ApiId']:string,['OperationId']:string,['ProductId']:string,['UserId']:string,['ApimSubscriptionId']:string,['BackendId']:string,['LastErrorElapsed']:long,['LastErrorSource']:string,['LastErrorScope']:string,['LastErrorSection']:string,['LastErrorReason']:string,['LastErrorMessage']:string,['ApiRevision']:string,['ClientTlsVersion']:string,['RequestHeaders']:dynamic,['ResponseHeaders']:dynamic,['BackendRequestHeaders']:dynamic,['BackendResponseHeaders']:dynamic,['RequestBody']:string,['ResponseBody']:string,['BackendRequestBody']:string,['BackendResponseBody']:string,['Errors']:dynamic,['TraceRecords']:dynamic,['SourceSystem']:string,['Type']:string,['_ResourceId']:string,['_SubscriptionId']:string)[] };restrict access to (*);\r\nApiManagementGatewayLogs\\n| where ApiId == \\\"my-api\\\"\\n| where ResponseCode == 429\\n| where _SubscriptionId==\\\"xxxxxxxxxxxxxxxxxxxxxxxx\\\"\\n\\n'"
}
}
}
##[error]Check out the troubleshooting guide to see if your issue is addressed: https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/deploy/azure-resource-group-deployment?view=azure-devops#troubleshooting
##[error]Task failed while creating or updating the template deployment.
I am looking for two solutions:
From the pipeline side, how can I let users pass their queries in the same format as my base query, so that they are substituted into parameters.json correctly?
Secondly, how can I avoid the error above when the query ends with double quotes?
Note: I already tried modifying the query by replacing the " characters with /, but that didn't resolve the issue.
It's not completely clear from the question but it sounds like you're deploying something like a scheduled query rule using an ARM template via an Azure DevOps pipeline.
As the ARM template is a JSON document, any strings in it must be wrapped in double quotes. However, the log query you are passing in is written in Kusto, and as per the docs, strings in Kusto queries can be wrapped in single or double quotes.
If you rewrite your query as:
ApiManagementGatewayLogs | where ApiId == 'my-api'| where ResponseCode == 429 | where _SubscriptionId=='xxxxxxxxxxxxxxxxxxxxxxx'
it should succeed. In your parameters.json file this would look something like:
{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "logQuery": {
      "value": "ApiManagementGatewayLogs | where ApiId == 'my-api'| where ResponseCode == 429 | where _SubscriptionId=='xxxxxxxxxxxxxxxxxxxxxxx'"
    }
  }
}
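If you do need to keep double quotes inside the Kusto string, an alternative (sketched here on the assumption that the replace tokens task passes the value through verbatim) is to escape each inner quote with a backslash so the parameter stays valid JSON:
{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "logQuery": {
      "value": "ApiManagementGatewayLogs | where ApiId == \"my-api\" | where ResponseCode == 429 | where _SubscriptionId == \"xxxxxxxxxxxxxxxxxxxxxxx\""
    }
  }
}
Either approach keeps parameters.json valid; the single-quote version simply sidesteps the quoting problem altogether.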

How to get JSON output with Coffeescript?

I have integrated Hubot with Elasticsearch and Slack.
When we query the API with Postman, we get the output in JSON format.
When we query from Slack with CoffeeScript, we get plain output.
Here is the code:
showHealth = (msg) ->
  msg.send("Getting the health for the cluster: ")
  msg.http("http://show-acc.com/_cluster/health/")
    .get() (err, res, body) ->
      lines = body.split("\n")
      header = lines.shift()
      list = [header].concat(lines.sort().reverse()).join("\n")
      msg.send("/code \n ```#{list}```")
This prints plain output in Slack.
Could anyone please help me change the code so the output is printed in JSON format?
I believe you need to specify "mrkdwn": true, which will allow you to use backticks for code blocks. However, IMO the nicest way to achieve formatted messages is using Attachments, the structure of which is an array with hashed properties...
I've also got more mileage using robot.messageRoom rather than msg.send, something like this:
# Create attachment (note the double-quoted text so #{list} is interpolated)
msg = {
  attachments: [
    {
      fallback: 'Getting the health for the cluster: http://show-acc.com/_cluster/health/'
      title: 'Getting the health for the cluster:'
      title_link: 'http://show-acc.com/_cluster/health/'
      text: "/code \n ```#{list}```"
      mrkdwn_in: ['text']
    }
  ]
}
# Assign channel
channel = process.env.NOTIFY_ROOM
# Send it!
robot.messageRoom channel, msg
See the following refs for further info:
https://api.slack.com/docs/message-formatting#message_formatting
https://api.slack.com/docs/message-attachments
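As a side note, if the goal is simply to show the cluster health as readable JSON, one option (a rough sketch, not tested against your cluster) is to parse the response body and re-serialize it with indentation before sending it to Slack:
showHealth = (msg) ->
  msg.send "Getting the health for the cluster:"
  msg.http("http://show-acc.com/_cluster/health/")
    .get() (err, res, body) ->
      if err
        msg.send "Request failed: #{err}"
        return
      health = JSON.parse(body)                 # the body arrives as a plain string
      pretty = JSON.stringify(health, null, 2)  # pretty-print with 2-space indentation
      msg.send "```#{pretty}```"
The triple backticks keep Slack from collapsing the whitespace, which is usually enough to make the JSON readable.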

How to extract the results of Http Requests in Elm

Using Elm's Http package it is possible to make HTTP requests, for example to
https://api.github.com/users/nytimes/repos
These are all the New York Times repos on GitHub. Basically there are two items I'd want from the GitHub response: the id and the name.
[ { "id": 5803599, "name": "backbone.stickit" , ... },
{ "id": 21172032, "name": "collectd-rabbitmq" , ... },
{ "id": 698445, "name": "document-viewer" , ... }, ... ]
The type of Http.get requires a Json.Decode.Decoder:
> Http.get
<function> : Json.Decode.Decoder a -> String -> Task.Task Http.Error a
I don't know how to decode lists yet, so I used the decoder Json.Decode.list Json.Decode.string; at least the types matched, but I had no idea what to do with the resulting task object.
> tsk = Http.get (Json.Decode.list Json.Decode.string) url
{ tag = "AndThen", task = { tag = "Catch", task = { tag = "Async", asyncFunction = <function> }, callback = <function> }, callback = <function> }
: Task.Task Http.Error (List String)
> Task.toResult tsk
{ tag = "Catch", task = { tag = "AndThen", task = { tag = "AndThen", task = { tag = "Catch", task = { tag = "Async", asyncFunction = <function> }, callback = <function> }, callback = <function> }, callback = <function> }, callback = <function> }
: Task.Task a (Result.Result Http.Error (List String))
I just want an Elm object of the repo names so I can display in some div elements, but I can't even get the data out.
Can someone slowly walk me through how to write the decoder and how to get the data out with Elm?
Update for Elm 0.17:
I have updated the complete gist of this answer to work with Elm 0.17. You can see the full source code here. It will run on http://elm-lang.org/try.
A number of language and API changes were made in 0.17 that make some of the following recommendations obsolete. You can read about the 0.17 upgrade plan here.
I will leave the original answer for 0.16 untouched below, but you can compare the final gists to see a list of what has changed. I believe the newer 0.17 version is cleaner and easier to understand.
Original Answer for Elm 0.16:
It looks like you're using the Elm REPL. As noted here, you're not going to be able to execute tasks in the REPL. We'll get to more on why in a bit. Instead, let's create an actual Elm project.
I'm assuming you've downloaded the standard Elm tools.
You'll first need to create a project folder and open it up in a terminal.
A common way to get started on an Elm project is to use the StartApp package. Let's use that as a starting point. You first need to use the Elm package manager command line tool to install the required packages. Run the following in a terminal in your project root:
elm package install -y evancz/elm-html
elm package install -y evancz/elm-effects
elm package install -y evancz/elm-http
elm package install -y evancz/start-app
Now, create a file at the project root called Main.elm. Here is some boilerplate StartApp code to get you started. I won't go into explaining the details here since this question is specifically about Tasks. You can learn more by going through the Elm Architecture Tutorial. For now, copy this into Main.elm.
import Html exposing (..)
import Html.Events exposing (..)
import Html.Attributes exposing (..)
import Http
import StartApp
import Task exposing (Task)
import Effects exposing (Effects, Never)
import Json.Decode as Json exposing ((:=))

type Action
  = NoOp

type alias Model =
  { message : String }

app = StartApp.start
  { init = init
  , update = update
  , view = view
  , inputs = [ ]
  }

main = app.html

port tasks : Signal (Task.Task Effects.Never ())
port tasks = app.tasks

init =
  ({ message = "Hello, Elm!" }, Effects.none)

update action model =
  case action of
    NoOp ->
      (model, Effects.none)

view : Signal.Address Action -> Model -> Html
view address model =
  div []
    [ div [] [ text model.message ]
    ]
You can now run this code using elm-reactor. Go to the terminal in your project folder and enter
elm reactor
This will run a web server on port 8000 by default, and you can pull up http://localhost:8000 in your browser, then navigate to Main.elm to see the "Hello, Elm" example.
The end goal here is to create a button which, when clicked, pulls in the list of nytimes repositories and lists the IDs and names of each. Let's first create that button. We'll do so by using the standard html generation functions. Update the view function with something like this:
view address model =
  div []
    [ div [] [ text model.message ]
    , button [] [ text "Click to load nytimes repositories" ]
    ]
On its own, the button click does nothing. We need to create an Action that is then handled by the update function. The action the button is initiating is to fetch data from the Github endpoint. Action now becomes:
type Action
  = NoOp
  | FetchData
And we can now stub out the handling of this action in the update function like so. For now, let's change the message to show that the button click was handled:
update action model =
  case action of
    NoOp ->
      (model, Effects.none)

    FetchData ->
      ({ model | message = "Initiating data fetch!" }, Effects.none)
Lastly, we have to cause button clicks to trigger that new action. This is done using the onClick function, which generates a click event handler for that button. The button html generation line now looks like this:
button [ onClick address FetchData ] [ text "Click to load nytimes repositories" ]
Great! Now the message should be updated when you click it. Let's move onto Tasks.
As I mentioned earlier, the REPL does not (yet) support the invoking of tasks. This may seem counterintuitive if you're coming from an imperative language like Javascript, where when you write code that says "go fetch data from this url," it immediately creates an HTTP request. In a purely functional language like Elm, you do things a little differently. When you create a Task in Elm, you're really just indicating your intentions, creating a sort of "package" that you can hand off to the runtime in order to do something that causes side effects; in this case, contact the outside world and pull data down from a URL.
Let's go ahead and create a task that fetches the data from the url. First, we're going to need a type inside Elm to represent the shape of the data we care about. You indicated that you just wanted the id and name fields.
type alias RepoInfo =
  { id : Int
  , name : String
  }
As a note about type construction inside Elm, let's stop for a minute and talk about how we create RepoInfo instances. Since there are two fields, you can construct a RepoInfo in one of two ways. The following two statements are equivalent:
-- This creates a record using record syntax construction
{ id = 123, name = "example" }
-- This creates an equivalent record using RepoInfo as a constructor with two args
RepoInfo 123 "example"
That second way of constructing the instance will become more important when we talk about JSON decoding.
Let's also add a list of these to the model. We'll have to change the init function as well to start off with an empty list.
type alias Model =
  { message : String
  , repos : List RepoInfo
  }

init =
  let
    model =
      { message = "Hello, Elm!"
      , repos = []
      }
  in
    (model, Effects.none)
Since the data from the URL comes back in JSON format, we'll need a JSON decoder to translate the raw JSON into our type-safe Elm record. Create the following decoder:
repoInfoDecoder : Json.Decoder RepoInfo
repoInfoDecoder =
  Json.object2
    RepoInfo
    ("id" := Json.int)
    ("name" := Json.string)
Let's pick that apart. A decoder is what maps the raw JSON to the shape of the type we're mapping to. In this case, our type is a simple record alias with two fields. Remember that I mentioned a few steps ago that we can create a RepoInfo instance by using RepoInfo as a function that takes two parameters? That's why we're using Json.object2 to create the decoder. The first argument to Json.object2 is a function that itself takes two arguments, and that's why we're passing in RepoInfo. It is equivalent to a function with arity two.
The remaining arguments spell out the shape of the type. Since our RepoInfo model lists id first and name second, that's the order in which the decoder expects the arguments to be.
We'll need another decoder to decode a list of RepoInfo instances.
repoInfoListDecoder : Json.Decoder (List RepoInfo)
repoInfoListDecoder =
  Json.list repoInfoDecoder
Now that we have a model and decoder, we can create a function that returns the task for fetching the data. Remember, this isn't actually fetching any data, it's merely creating a function which we can hand off to the runtime later.
fetchData : Task Http.Error (List RepoInfo)
fetchData =
  Http.get repoInfoListDecoder "https://api.github.com/users/nytimes/repos"
There are a number of ways of handling the variety of errors that can occur. Let's choose Task.toResult, which maps the result of the request to a Result type. It will make things easier on us in a bit, and is sufficient for this example. Let's change that fetchData signature to this:
fetchData : Task x (Result Http.Error (List RepoInfo))
fetchData =
  Http.get repoInfoListDecoder "https://api.github.com/users/nytimes/repos"
    |> Task.toResult
Note that I'm using x in my type annotation for the error value of Task. That's just because, by mapping to a Result, I'll never have to care about an error from the task.
Now, we're going to need some actions to handle the two possible results: An HTTP error or a successful result. Update Action with this:
type Action
  = NoOp
  | FetchData
  | ErrorOccurred String
  | DataFetched (List RepoInfo)
Your update function should now set those values on the model.
update action model =
  case action of
    NoOp ->
      (model, Effects.none)

    FetchData ->
      ({ model | message = "Initiating data fetch!" }, Effects.none)

    ErrorOccurred errorMessage ->
      ({ model | message = "Oops! An error occurred: " ++ errorMessage }, Effects.none)

    DataFetched repos ->
      ({ model | repos = repos, message = "The data has been fetched!" }, Effects.none)
Now, we need a way to map the Result task to one of these new actions. Since I don't want to get bogged down in error handling, I'm just going to use toString to change the error object into a string for debugging purposes:
httpResultToAction : Result Http.Error (List RepoInfo) -> Action
httpResultToAction result =
  case result of
    Ok repos ->
      DataFetched repos

    Err err ->
      ErrorOccurred (toString err)
That gives us a way to map a never-failing task to an Action. However, StartApp deals with Effects, which is a thin layer over Tasks (as well as a few other things). We'll need one more piece before we can tie it all together, and that's a way to map the never-failing HTTP task to an Effects of our type Action.
fetchDataAsEffects : Effects Action
fetchDataAsEffects =
  fetchData
    |> Task.map httpResultToAction
    |> Effects.task
You may have noticed I called this thing "never failing." That was confusing to me at first, so let me try to explain. When we create a task, we're guaranteed a result, be it a success or a failure. In order to make Elm apps as robust as possible, we in essence remove the possibility of failure (by which I mainly mean an unhandled Javascript exception) by explicitly handling every case. That's why we've gone through the trouble of mapping first to a Result and then to our Action, which explicitly handles error messages. To say it never fails is not to say that HTTP problems can't happen; it's to say that we're handling every possible outcome, and errors are mapped to "successes" by mapping them to a valid action.
Before our final step, let's make sure our view can show the list of repositories.
view : Signal.Address Action -> Model -> Html
view address model =
  let
    showRepo repo =
      li []
        [ text ("Repository ID: " ++ (toString repo.id) ++ "; ")
        , text ("Repository Name: " ++ repo.name)
        ]
  in
    div []
      [ div [] [ text model.message ]
      , button [ onClick address FetchData ] [ text "Click to load nytimes repositories" ]
      , ul [] (List.map showRepo model.repos)
      ]
Lastly, the piece that ties this all together is to make the FetchData case of our update function return the Effect which initiates our task. Update the case statement like this:
    FetchData ->
      ({ model | message = "Initiating data fetch!" }, fetchDataAsEffects)
That's it! You can now run elm reactor and click the button to fetch the list of repositories. If you want to test out the error handling, you can just mangle the URL for the Http.get request to see what happens.
I've posted the entire working example of this as a gist. If you don't want to run it locally, you can see the final result by pasting that code into http://elm-lang.org/try.
I've tried to be very explicit and concise about each step along the way. In a typical Elm app, a lot of these steps will be condensed down to a few lines, and more idiomatic shorthand will be used. I've tried to spare you those hurdles by making things as small and explicit as possible. I hope this helps!

How do you pass a JSON object to Enyo through a web service?

I'm trying to pass a JSON object to Enyo using a web service.
The file being loaded from the service in Enyo:
{ "Comments" : ["NewComment 1", "NewComment 2", "NewComment 3" ]}
The following callback for the service generates an error:
gotComments: function(inSender, inResponse) {
    this.serverReply = InResponse; // error: uncaught reference error: inResponse not defined
    this.$.list.render();
},
When I click on inReply in my Chrome debugger it shows:
Object:
Comments: Array[3]
How can it say it is not defined, if the watch window shows it as:
Object:
Comments: Array[3]
The code in your question mixes InResponse (capital I) and inResponse (lowercase i). Assuming this is what your real code looks like, change
this.serverReply = InResponse;
to
this.serverReply = inResponse;
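For reference, the corrected handler, with the rest of your code left unchanged, would look like this:
gotComments: function(inSender, inResponse) {
    this.serverReply = inResponse; // parameter name must match exactly (lowercase "i")
    this.$.list.render();
},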

Django dump JSON data

I'm trying to enter a word and have it show up on the page through Ajax. There's something simple I'm missing...
So I'm sending the info like this with jQuery:
$.ajax({
    url: url,
    type: "POST",
    data: {'word': word},
    success: function(data){
        //do something
    }
});
and the information is getting into the view and saving into the DB. The problem happens when I try to return the new word:
def add_word(request, lecture_id):
    l = get_object_or_404(Lecture, pk=lecture_id)
    if request.method == "POST":
        # see if there is a value with p
        if request.POST.has_key('word') and request.POST['word'] != "":
            success = {}
            try:
                oldWord = l.post_set.get(word=request.POST['word'])
            except:
                newWord = l.post_set.create(word=request.POST['word'], count=1)
                success = {'new': str(newWord.word), 'count': str(newWord.count)}
            else:
                oldWord.count += 1
                oldWord.save()
                success = {'old': str(oldWord.word), 'count': str(oldWord.count)}
            return HttpResponse(json.dumps(success), mimetype="application/javascript")
    return HttpResponse(reverse('post.views.lecture_display', args=(l.id,)))
I'm getting a 500 error...
[13/Oct/2011 15:14:48] "POST /lecture/3/add HTTP/1.1" 500 66975
Without seeing the traceback, my guess is that what's failing is [one of]:
# A) This path is not resolving correctly (see named-URLs in Django's docs)
reverse('post.views.lecture_display', args=(l.id,))
# B) This word has unicode data, which can't simply be passed to ``str``
str(oldWord.word)
Open the URL directly in your browser, and you'll get the default Django traceback, 500 view.
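If either of those is the culprit, a rough sketch of the view with both suspected issues addressed might look like the following (untested, since the traceback isn't shown; Lecture and the URL name come from your own project):
import json

from django.core.urlresolvers import reverse
from django.http import HttpResponse
from django.shortcuts import get_object_or_404

def add_word(request, lecture_id):
    l = get_object_or_404(Lecture, pk=lecture_id)  # Lecture comes from your own models
    if request.method == "POST" and request.POST.get('word'):
        word = request.POST['word']
        try:
            entry = l.post_set.get(word=word)
            entry.count += 1
            entry.save()
            payload = {'old': entry.word, 'count': entry.count}
        except l.post_set.model.DoesNotExist:  # catch only the "not found" case
            entry = l.post_set.create(word=word, count=1)
            payload = {'new': entry.word, 'count': entry.count}
        # json.dumps handles unicode words; no str() needed
        return HttpResponse(json.dumps(payload), mimetype="application/json")
    # if this reverse() call is what raises, switch to the URL's name from urls.py
    return HttpResponse(reverse('post.views.lecture_display', args=(l.id,)))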
I think you need to learn debugging rather than look for a particular fix.
Try opening that URL without POST data and see if there's a syntax or a name error.
If the problem persists, use the ipdb or pudb package, insert the following line in the view, and analyze what happens inside your code:
def myview(request, id):
    import ipdb; ipdb.set_trace()
Use Chrome Developer Tools or Firebug to see what the server outputs and what urls it opens. Also take a look at Django Debug Toolbar and Werkzeug. The debug toolbar can show you all the templates that were rendered and all the local variables. Werkzeug also gives you a debug shell in any place of the call stack right from the browser.