So here is what I'm trying to do. I'm building a simple Ruby file that will ask the user for input, a city, and then return weather results for that city. I've never written in Ruby, nor have I ever used APIs. But here is my attempt.
The API response below:
> {"coord"=>{"lon"=>-85.68, "lat"=>40.11}, "weather"=>[{"id"=>501,
> "main"=>"Rain", "description"=>"moderate rain", "icon"=>"10d"}],
> "base"=>"stations", "main"=>{"temp"=>57.78, "pressure"=>1009,
> "humidity"=>100, "temp_min"=>57, "temp_max"=>60.01},
> "wind"=>{"speed"=>5.17, "deg"=>116.005}, "rain"=>{"1h"=>1.02},
> "clouds"=>{"all"=>92}, "dt"=>1475075671, "sys"=>{"type"=>3,
> "id"=>187822, "message"=>0.1645, "country"=>"US",
> "sunrise"=>1475062634, "sunset"=>1475105280}, "id"=>4917592,
> "name"=>"Anderson", "cod"=>200} [Finished in 2.0s]
The Ruby file below:
require 'net/http'
require 'json'
url = 'http://api.openweathermap.org/data/2.5/weather?q=anderson&APPID=5c89010425b4d730b7558f57234ea3c8&units=imperial'
uri = URI(url)
response = Net::HTTP.get(uri)
parsed = JSON.parse(response)
puts parsed #Print this so I can see results
puts temp = JSON.parse(response)['main']['temp']
puts desc = JSON.parse(response)['weather']['description']
puts humid = JSON.parse(response)['main']['humidity']
puts wind = JSON.parse(response)['wind']['speed']
What I was trying to do was pull out only a few items: temperature, description, humidity, and wind. But I can't seem to get it right; I keep getting undefined errors with each attempt.
(I want to complete this without gems or anything that isn't already built into Ruby. I have not written the user-input part yet.)
Your problem is that parsed['weather'] is an array, so you won't be able to access ['weather']['description']; instead you will have to do something like ['weather'][0]['description']:
2.3.0 :020 > puts parsed['weather'][0]['description']
moderate rain
2.3.0 :021 > puts parsed['main']['humidity']
100
2.3.0 :022 > puts parsed['wind']['speed']
5.17
2.3.0 :025 > puts parsed['main']['temp']
58.8
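For completeness, here is a minimal corrected version of the script (same endpoint and fields as above; the response is parsed once and reused):

require 'net/http'
require 'json'

url = 'http://api.openweathermap.org/data/2.5/weather?q=anderson&APPID=5c89010425b4d730b7558f57234ea3c8&units=imperial'
uri = URI(url)
response = Net::HTTP.get(uri)
parsed = JSON.parse(response) # parse once and reuse

temp  = parsed['main']['temp']
desc  = parsed['weather'][0]['description'] # 'weather' is an array of hashes
humid = parsed['main']['humidity']
wind  = parsed['wind']['speed']

puts "Temperature: #{temp}"
puts "Conditions: #{desc}"
puts "Humidity: #{humid}"
puts "Wind speed: #{wind}"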
I'm trying to display second-level information about characters from this Futurama API. Currently I'm using this code to get the information:
def self.character
  uri = URI.parse(URL)
  response = Net::HTTP.get_response(uri)
  data = JSON.parse(response.body)
  data.each do |c|
    Character.new(c["name"], c["gender"], c["species"], c["homePlanet"], c["occupation"], c["info"], c["sayings"])
  end
end
I'm then stuck returning gender and species from either the nested hash (if the character id is > 8) or the original top-level hash (if the id is < 8) when using this code:
def character_details(character)
  puts "Name: #{character.name["first"]} #{character.name["middle"]} #{character.name["last"]}"
  puts "Species: #{character.info["species"]}"
  puts "Occupation: #{character.homePlanet}"
  puts "Gender: #{character.info["gender"]}"
  puts "Quotes:"
  character.sayings.each_with_index do |s, i|
    iplusone = i + 1
    puts "#{iplusone}. #{s} "
  end
end
Not sure where or what logic to use to get the correct information to display.
Maybe you have a problem when saving c['info'] in Character.new(c["name"], c["gender"], c["species"], c["homePlanet"], c["occupation"], c["info"], c["sayings"]).
I ran your code, and info does not exist in the API response; gender should be accessed as character.gender:
irb(main):037:0> character.gender
=> "Male"
irb(main):039:0> character.species
=> "Human"
I don't understand this comment: "(if character id > 8) or the original hash (character id < 8)". Can you explain what you need to do?
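If the point is that some records nest gender and species under an "info" key while others keep them at the top level (the id > 8 / id < 8 split you describe), one option is to normalize when building the objects. A sketch under that assumption — the key names are guesses based on your snippets, and map replaces each so the Characters are returned:

def self.character
  uri = URI.parse(URL)
  response = Net::HTTP.get_response(uri)
  data = JSON.parse(response.body)
  data.map do |c|
    info    = c["info"] || {}                  # assumed nested hash, may be absent
    gender  = c["gender"]  || info["gender"]   # prefer top-level, fall back to nested
    species = c["species"] || info["species"]
    Character.new(c["name"], gender, species, c["homePlanet"],
                  c["occupation"], info, c["sayings"])
  end
end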
I searched online and read through the documentation, but have not been able to find an answer. I am fairly new, and as part of learning Ruby I wanted to write the script below.
The script essentially does a carrier lookup on a list of numbers provided through a CSV file. The CSV file has just one column, with the header "number".
Everything runs fine UNTIL the API gives me an output that is different from the others. In this example, it tells me that one of the numbers in my file is not a valid US number. This then causes my script to stop running.
I am looking to see if there is a way to either ignore the error (I read about begin/rescue/end, but was not able to get it to work) or, ideally, write those errors to a separate file or into the main file.
Any help would be much appreciated. Thank you.
Ruby Code:
require 'csv'
require 'uri'
require 'net/http'
require 'json'

number = 0

CSV.foreach('data1.csv', headers: true) do |row|
  number = row['number'].to_i
  uri = URI("https://api.message360.com/api/v3/carrier/lookup.json?PhoneNumber=#{number}")
  req = Net::HTTP::Post.new(uri)
  req.basic_auth 'XXX', 'XXX'
  res = Net::HTTP.start(uri.hostname, uri.port, :use_ssl => true) { |http|
    http.request(req)
  }
  json = JSON.parse(res.body)
  new = json["Message360"]["Carrier"].values
  CSV.open("new.csv", "ab") do |csv|
    csv << new
  end
end
File Data:
number
5556667777
9998887777
Good Response example in JSON:
{"Message360"=>{"ResponseStatus"=>1, "Carrier"=>{"ApiVersion"=>"3", "CarrierSid"=>"XXX", "AccountSid"=>"XXX", "PhoneNumber"=>"+19495554444", "Network"=>"Cellco Partnership dba Verizon Wireless - CA", "Wireless"=>"true", "ZipCode"=>"92604", "City"=>"Irvine", "Price"=>0.0003, "Status"=>"success", "DateCreated"=>"2018-05-15 23:05:15"}}}
The response that causes the script to stop:
{
    "Message360": {
        "ResponseStatus": 0,
        "Errors": {
            "Error": [
                {
                    "Code": "ER-M360-CAR-111",
                    "Message": "Allowed Only Valid E164 North American Numbers.",
                    "MoreInfo": []
                }
            ]
        }
    }
}
It would appear you can just check json["Message360"]["ResponseStatus"] first for a 0 or 1 to indicate failure or success.
I'd probably also add a rescue to help catch any other errors (malformed JSON, network issues, etc.):
CSV.foreach('data1.csv', headers: true) do |row|
  number = row['number'].to_i
  ...
  json = JSON.parse(res.body)
  if json["Message360"]["ResponseStatus"] == 1
    new = json["Message360"]["Carrier"].values
    CSV.open("new.csv", "ab") do |csv|
      csv << new
    end
  else
    # handle bad response
  end
rescue StandardError => e
  # request failed for some reason; log e and the number?
  # (note: rescue directly inside a do...end block requires Ruby 2.5+)
end
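Since you said you'd ideally like the failures in a separate file, here is a fuller sketch along those lines. The errors.csv name is my own choice, the ["Errors"]["Error"] path is taken from the failure response you posted, and as above the in-block rescue needs Ruby 2.5+:

require 'csv'
require 'uri'
require 'net/http'
require 'json'

CSV.foreach('data1.csv', headers: true) do |row|
  number = row['number'].to_i
  uri = URI("https://api.message360.com/api/v3/carrier/lookup.json?PhoneNumber=#{number}")
  req = Net::HTTP::Post.new(uri)
  req.basic_auth 'XXX', 'XXX'
  res = Net::HTTP.start(uri.hostname, uri.port, :use_ssl => true) { |http| http.request(req) }

  json = JSON.parse(res.body)
  if json["Message360"]["ResponseStatus"] == 1
    # Success: append the carrier fields to the main file as before
    CSV.open("new.csv", "ab") { |csv| csv << json["Message360"]["Carrier"].values }
  else
    # Failure: collect the error messages and log them next to the number
    errors = Array(json.dig("Message360", "Errors", "Error")).map { |e| e["Message"] }
    CSV.open("errors.csv", "ab") { |csv| csv << [number, *errors] }
  end
rescue StandardError => e
  # Network problems, malformed JSON, etc. also go to the error file
  CSV.open("errors.csv", "ab") { |csv| csv << [number, e.message] }
end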
OK, I am trying to create a function that reads a list of IDs from an external JSON file, which it is doing. It's even putting the data into the database on load of the program. My issue is this: I can't seem to match the list IDs in a comparison. Here is my current code:
def check(account):
    global ID_account
    import json, httplib
    if not hasattr(BigWorld, 'iddata'):
        UID_DB = account['databaseID']
        UID = ID_account
        try:
            conn = httplib.HTTPConnection('URL')
            conn.request('GET', '/ids.json')
            conn.sock.settimeout(2)
            resp = conn.getresponse()
            qresp = resp.read()
            BigWorld.iddata = json.loads(qresp)
            LOG_NOTE('[ABRO] Request of URL data successful.')
            conn.close()
        except:
            LOG_NOTE('[ABRO] Http request to URL problem. Loading local data.')
        if UID_DB is not None:
            list = BigWorld.iddata["ids"]
            #print (len(list) - 1)
            for n in range(0, (len(list) - 1)):
                #print UID_DB
                #print list[n]
                if UID_DB == list[n]:
                    #print '[ABRO] userid located:'
                    #print UID_DB
                    UID = UID_DB
                else:
                    LOG_NOTE('[ABRO] userid not set.')
    if 'databaseID' in account and account['databaseID'] != UID:
        print '[ABRO] Account not active in database, game closing...... '
        BigWorld.quit()
Now my JSON file looks like this:
{
    "ids": [
        "1001583757",
        "500687699",
        "000000000"
    ]
}
Now when I run this with all the commented-out prints enabled, it seems to execute perfectly fine until it tries to do the match inside the for loop. Even when the prints show UID_DB and list[n] having the same values, it does not set my variable; it posts no errors, it simply acts as if there was no match. Am I possibly missing a loop break? Here is the Python log, starting with the print of the list length:
INFO: 2
INFO: 1001583757
INFO: 1001583757
INFO: 1001583757
INFO: 500687699
INFO: [ABRO] Account not active, game closing......
As you can see from the log, it never prints the 'userid located' message, so it is not matching them; it just continues with the loop and uses the default ID I defined above the function. Anyone with an idea would definitely help me out, as I've been poking and prodding at this thing for 3 days now.
The answer to this was found by @VikasNehaOjha: the code was simply missing a type conversion so the two sides match before the comparison. I did this by adding
list[n] = int(list[n])
just before the comparison; that resolved my issue, and the comparisons finally matched.
I am trying to read a very huge JSON file using R, and I am using the rjson package with this command: json_data <- fromJSON(paste(readLines("myfile.json"), collapse=""))
The problem is that I am getting this error message:
Error in paste(readLines("myfile.json"), collapse = "") :
could not allocate memory (2383 Mb) in C function 'R_AllocStringBuffer'
Can anyone help me with this issue?
Well, just sharing my experience of reading JSON files. Trying to read 52.8MB, 19.7MB, 1.3GB, 93.9MB, and 158.5MB JSON files cost me 30 minutes and ultimately auto-restarted my R session. After that I tried to apply parallel computing, and would have liked to see the progress, but that failed too:
https://github.com/hadley/plyr/issues/265
Then I tried adding the parameter pagesize = 10000; it worked and was more efficient than ever. We only need to read the files once and can later save them in RData/Rda/Rds format via saveRDS:
> suppressPackageStartupMessages(library('BBmisc'))
> suppressAll(library('jsonlite'))
> suppressAll(library('plyr'))
> suppressAll(library('dplyr'))
> suppressAll(library('stringr'))
> suppressAll(library('doParallel'))
>
> registerDoParallel(cores=16)
>
> ## https://www.kaggle.com/c/yelp-recsys-2013/forums/t/4465/reading-json-files-with-r-how-to
> ## https://class.coursera.org/dsscapstone-005/forum/thread?thread_id=12
> fnames <- c('business','checkin','review','tip','user')
> jfile <- paste0(getwd(),'/yelp_dataset_challenge_academic_dataset/yelp_academic_dataset_',fnames,'.json')
> dat <- llply(as.list(jfile), function(x) stream_in(file(x),pagesize = 10000),.parallel=TRUE)
> dat
list()
> jfile
[1] "/home/ryoeng/Coursera-Data-Science-Capstone/yelp_dataset_challenge_academic_dataset/yelp_academic_dataset_business.json"
[2] "/home/ryoeng/Coursera-Data-Science-Capstone/yelp_dataset_challenge_academic_dataset/yelp_academic_dataset_checkin.json"
[3] "/home/ryoeng/Coursera-Data-Science-Capstone/yelp_dataset_challenge_academic_dataset/yelp_academic_dataset_review.json"
[4] "/home/ryoeng/Coursera-Data-Science-Capstone/yelp_dataset_challenge_academic_dataset/yelp_academic_dataset_tip.json"
[5] "/home/ryoeng/Coursera-Data-Science-Capstone/yelp_dataset_challenge_academic_dataset/yelp_academic_dataset_user.json"
> dat <- llply(as.list(jfile), function(x) stream_in(file(x),pagesize = 10000),.progress='=')
opening file input connection.
Imported 61184 records. Simplifying into dataframe...
closing file input connection.
opening file input connection.
Imported 45166 records. Simplifying into dataframe...
closing file input connection.
opening file input connection.
Found 470000 records...
I got the same problem while working with huge datasets in R. I used the jsonlite package to read the JSON in R:
library(jsonlite)
get_tweets <- stream_in(file("tweets.json"),pagesize = 10000)
Here tweets.json is my file name (including its location), and pagesize represents how many lines it reads in one iteration. Hope it helps.
For some reason the above solutions all caused R to terminate or worse.
This solution worked for me, with the same data set:
library(jsonlite)
file_name <- 'C:/Users/Downloads/yelp_dataset/yelp_dataset~/dataset/business.JSON'
business<-jsonlite::stream_in(textConnection(readLines(file_name, n=100000)),verbose=F)
It took about 15 minutes.
With Ruby 1.8, FeedTools is able to fetch and parse RSS/Atom feed links given a non-feed link. For example:
ruby-1.8.7-p174 > f = FeedTools::Feed.open("http://techcrunch.com/")
=> #<FeedTools::Feed:0xc99cf8 URL:http://feeds.feedburner.com/TechCrunch>
ruby-1.8.7-p174 > f.title
=> "TechCrunch"
Whereas with JRuby 1.5.2, FeedTools is unable to fetch and parse RSS/Atom feed links given a non-feed link. For example:
jruby-1.5.2 > f = FeedTools::Feed.open("http://techcrunch.com/")
=> #<FeedTools::Feed:0x1206 URL:http://techcrunch.com/>
jruby-1.5.2 > f.title
=> nil
At times, it also gives the following error:
FeedTools::FeedAccessError: [URL] does not appear to be a feed.
Any ideas on how I can get FeedTools to work with JRuby?
There seems to be a bug in the feedtools gem. In the method that locates feed links with a given MIME type, replace lambda with Proc.new, so that the method returns from inside the proc when the feed link is found (see the short illustration after the patch):
--- a/feedtools-0.2.29/lib/feed_tools/helpers/html_helper.rb
+++ b/feedtools-0.2.29/lib/feed_tools/helpers/html_helper.rb
@@ -620,7 +620,7 @@
       end
     end
     get_link_nodes.call(document.root)
-    process_link_nodes = lambda do |links|
+    process_link_nodes = Proc.new do |links|
       for link in links
         next unless link.kind_of?(REXML::Element)
         if link.attributes['type'].to_s.strip.downcase ==
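For context, here is a minimal standalone illustration (not from the gem) of the semantic difference the patch relies on: return inside a lambda only exits the lambda, while return inside a Proc.new block returns from the enclosing method.

def find_with_lambda
  finder = lambda { return :found }
  finder.call
  :fell_through # reached: the lambda's return only exits the lambda
end

def find_with_proc
  finder = Proc.new { return :found }
  finder.call
  :fell_through # never reached: the proc's return exits find_with_proc
end

puts find_with_lambda # => fell_through
puts find_with_proc   # => found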