Opening a JSON string to easily read and write to in Ruby

I have a JSON file that I am using to store information, so it is constantly going to be both read and written.
I am completely new to Ruby and OOP in general, so I am sure I am going about this in a crazy way.
require 'json'

class Load
  def initialize(save_name)
    puts "loading " + save_name
    @data = JSON.parse(IO.read($user_library + save_name))
    @subject   = @data["subject"]
    @id        = @data["id"]
    @save_name = @data["save_name"]
    @listA     = @data["listA"] # an array containing dictionaries
    @listB     = @data["listB"] # an array containing dictionaries
  end
  attr_reader :data, :subject, :id, :save_name, :listA, :listB
end
example = Load.new("test.json")
puts example.id
=> 937489327389749
So I can now easily read the JSON file, but how could I write back to the file, referring to example? Say I wanted to change the id (example.id.change(7129371289)...), or add dictionaries to lists A and B. Is this possible?

The simplest way to go to/from JSON is to just use the JSON library to transform your data as appropriate:
json = my_object.to_json — method on the specific object to create a JSON string.
json = JSON.generate(my_object) — create JSON string from object.
JSON.dump(my_object, someIO) — create a JSON string and write to a file.
my_object = JSON.parse(json) — create a Ruby object from a JSON string.
my_object = JSON.load(someIO) — create a Ruby object from a file.
Taken from this answer to another of your questions.
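As a minimal round-trip sketch of the question's use case (the file name and keys are just illustrative), reading, modifying, and writing back might look like this:
require 'json'

# Load the file into a plain Ruby Hash.
data = JSON.parse(File.read("test.json"))

# Modify it like any other Hash.
data["id"] = 7129371289
(data["listA"] ||= []) << { "name" => "example" }

# Write the whole structure back out.
File.open("test.json", "w") { |f| JSON.dump(data, f) }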
However, you could wrap this in a class if you wanted:
class JSONHash
  require 'json'

  def self.from(file)
    self.new.reload(file)
  end

  def initialize(h={})
    @h = h
  end

  # Save this to disk, optionally specifying a new location
  def save(file=nil)
    @file = file if file
    File.open(@file, 'w') { |f| JSON.dump(@h, f) }
    self
  end

  # Discard all changes to the hash and replace with the information on disk
  def reload(file=nil)
    @file = file if file
    @h = JSON.parse(IO.read(@file))
    self
  end

  # Let our internal hash handle most methods, returning what it likes
  def method_missing(*a, &b)
    @h.send(*a, &b)
  end

  # But these methods normally return a Hash, so we re-wrap them in our class
  %w[ invert merge select ].each do |m|
    class_eval <<-ENDMETHOD
      def #{m}(*a,&b)
        self.class.new @h.send(#{m.inspect},*a,&b)
      end
    ENDMETHOD
  end

  def to_json
    @h.to_json
  end
end
The above behaves just like a hash, but you can use foo = JSONHash.from("foo.json") to load from disk, modify that hash as you would normally, and then just foo.save when you want to save out to disk.
Or, if you don't have a file on disk to begin with:
foo = JSONHash.new a: 42, b: 17, c: "whatever initial values you want"
foo.save 'foo.json'
# keep modifying foo
foo[:bar] = 52
foo.save # saves to the last saved location


Count the number of people having a property bounded by two numbers

The following code goes over the 10 pages of JSON returned by a GET request to the URL and checks how many records satisfy the condition that bloodPressureDiastole is between the specified limits. It does the job, but I was wondering if there is a better or cleaner way to achieve this in Python.
import urllib.request
import urllib.parse
import json

baseUrl = 'https://jsonmock.hackerrank.com/api/medical_records?page='
count = 0
for i in range(1, 11):
    url = baseUrl + str(i)
    f = urllib.request.urlopen(url)
    response = f.read().decode('utf-8')
    response = json.loads(response)
    lowerlimit = 110
    upperlimit = 120
    for elem in response['data']:
        bd = elem['vitals']['bloodPressureDiastole']
        if bd >= lowerlimit and bd <= upperlimit:
            count = count + 1
print(count)
You don't get attribute access to the JSON content because json.loads returns a dict object (see the translation scheme here). A dict provides access via the __getitem__ method (dict[key]) rather than __getattr__ (object.field), since keys may be any hashable objects, not only strings. Moreover, even strings cannot serve as attribute names if they start with digits or collide with built-in dictionary methods.
Despite this, you can define your own class implementing the desired behavior for acceptable key names. json.loads has an object_hook argument, where you may pass any callable (function or class) that takes a dict as its sole argument (not only the top-level result but every dict in the JSON, recursively) and returns something in its place. If your JSON matches some template, you can define a class with predefined fields for the JSON content, and even with methods, in order to get a robust Python object as part of your domain logic.
For instance, let's implement access through attributes. I get the JSON content from the response.json method of requests, but it takes the same arguments as the json package. The comments in the code contain remarks on how to make your code more Pythonic.
from collections import Counter
from requests import get


class CustomJSON(dict):
    def __getattr__(self, key):
        return self[key]

    def __setattr__(self, key, value):
        self[key] = value


LOWER_LIMIT = 110  # Constants should be in uppercase.
UPPER_LIMIT = 120

base_url = 'https://jsonmock.hackerrank.com/api/medical_records'
# It is better to use special tools for handling URLs
# in order to avoid possible exceptions in the future.
# By the way, your option could look clearer with f-strings,
# which can interpolate values from variables (and more) in place:
# url = f'https://jsonmock.hackerrank.com/api/medical_records?page={i}'

counter = Counter(normal_pressure=0)
# It might be left as it was. This option is useful
# in case you need to count any other information as well.

for page_number in range(1, 11):
    records = get(
        base_url, params={"page": page_number}
    ).json(object_hook=CustomJSON)
    # Python has a pile of libraries for handling URL requests & responses.
    # urllib is a standard library rewritten from scratch for Python 3.
    # However, there is a more featured (connection pooling, redirects, proxies,
    # SSL verification &c.) & convenient third-party
    # (this is the only disadvantage) library: urllib3.
    # Based on it, requests provides an easier, more convenient & friendlier way
    # to work with URL requests. So I highly recommend using it
    # unless you are aiming for complex connections & URL processing.
    for elem in records.data:
        if LOWER_LIMIT <= elem.vitals.bloodPressureDiastole <= UPPER_LIMIT:
            counter["normal_pressure"] += 1

print(counter)

CSV not importing JSON with correct format into database

Just like the title says, here is my code:
require 'csv'
require 'json'

def import_csv
  path = Rails.root.join('folder1', 'folder2', 'file.csv')
  counter = 0
  puts "inserts on table started..."
  CSV.foreach(path, headers: true) do |row|
    next if row.to_hash['deleted_at'] != nil
    counter += 1
    puts row.to_json # shows correct format
    some_model = SomeModel.new(row.to_hash) # imports incorrect format of JSON, with backslashes, into the DB
    # some_model = SomeModel.new(row.to_json) # ArgumentError: When assigning attributes, you must pass a hash as an argument.
    some_model.skip_callbacks = true
    some_model.save!
  end
  puts "#{counter} inserts on table apps complete"
end

import_csv
I cannot import the CSV file in the correct format. The import works, but the structure is wrong.
EXPECTED
{"data":{"someData":72}}
GETTING
"{\"data\":{\"someData\":72}}"
How can I import it with the correct JSON format?
If all headers match the column names of the model, maybe you can try:
JSON.parse(row.to_json)
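A minimal sketch of how that could fit into the loop above, assuming the JSON text lives in a data column (the column name is an assumption):
attrs = row.to_hash
# Re-parse the JSON string into a Hash so a structure is stored,
# not a doubly escaped string ("data" is a hypothetical JSON column).
attrs["data"] = JSON.parse(attrs["data"]) if attrs["data"].is_a?(String)
some_model = SomeModel.new(attrs)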

How do I cause ActiveModelSerializers to serialize with :attributes and respect my key_transform?

I have a very simple model that I wish to serialize in a Rails (5) API. I want to produce the resulting JSON keys as CamelCase (because that's what my client expects). Because I expect the model to increase in complexity in the future, I figured I should use ActiveModelSerializers. Because the consumer of the API expects a trivial JSON object, I want to use the :attributes adapter.
But I cannot seem to get AMS to respect my setting of :key_transform, regardless of whether I set ActiveModelSerializers.config.key_transform = :camel in my configuration file or create the resource via s = ActiveModelSerializers::SerializableResource.new(t, {key_transform: :camel}) (where t represents the ActiveModel object to be serialized) in the controller. In either case, I call render json: s.as_json.
Is this a configuration problem? Am I incorrectly expecting the default :attributes adapter to respect the setting of :key_transform (this seems unlikely, based on my reading of the code in the class, but I'm often wrong)? Cruft in my code? Something else?
If additional information would be helpful, please ask, and I'll edit my question.
Controller(s):
class ApplicationController < ActionController::API
  before_action :force_json

  private

  def force_json
    request.format = :json
  end
end
require 'active_support'
require 'active_support/core_ext/hash/keys'

class AvailableTrucksController < ApplicationController
  def show
    t = AvailableTruck.find_by(truck_reference_id: params[:id])
    s = ActiveModelSerializers::SerializableResource.new(t, {key_transform: :camel})
    render json: s.as_json
  end
end
config/application.rb
require_relative 'boot'
require 'rails/all'

Bundler.require(*Rails.groups)

module AvailableTrucks
  class Application < Rails::Application
    config.api_only = true
    ActiveModelSerializers.config.key_transform = :camel
    # ActiveModelSerializers.config.adapter = :json_api
    # ActiveModelSerializers.config.jsonapi_include_toplevel_object = false
  end
end
class AvailableTruckSerializer < ActiveModel::Serializer
  attributes :truck_reference_id, :dot_number, :trailer_type, :trailer_length, :destination_states,
             :empty_date_time, :notes, :destination_anywhere, :destination_zones
end
FWIW, I ended up taking an end-around to an answer. From previous attempts to resolve this problem, I knew that I could get the correct answer if I had a single instance of my model to return. What the work with ActiveModel::Serialization was intended to resolve was how to achieve that result with both the #index and #show methods of the controller.
Since I had this previous result, I instead extended it to solve my problem. Previously, I knew that the correct response would be generated if I did:
def show
  t = AvailableTruck.find_by(truck_reference_id: params[:id])
  render json: t.as_json.deep_transform_keys(&:camelize) unless t.nil?
end
What had frustrated me was that the naive extension of that to the array returned by AvailableTruck.all failed: the keys were left in snake_case.
It turned out that the "correct" (if unsatisfying) answer was:
def index
  trucks = []
  AvailableTruck.all.inject(trucks) do |a, t|
    a << t.as_json.deep_transform_keys(&:camelize)
  end
  render json: trucks
end
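For what it's worth, the same inject can be written a little more compactly with map; a sketch of the equivalent action, under the same model:
def index
  # Build the camelized hashes directly instead of accumulating into
  # an external array via inject.
  trucks = AvailableTruck.all.map do |t|
    t.as_json.deep_transform_keys(&:camelize)
  end
  render json: trucks
end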

How can I store a hash for the lifetime of a 'jekyll build'?

I am coding a custom Liquid tag as Jekyll plugin for which I need to preserve some values until the next invocation of the tag within the current run of the jekyll build command.
Is there some global location/namespace that I could use to store and retrieve values (preferably key-value pairs / a hash)?
You could add a module with class variables for storing the persistent values, then include the module in your tag class. You would need the proper accessors depending on the type of the variables and the assignments you might want to make. Here's a trivial example implementing a simple counter that keeps track of the number of times the tag was called in the class variable @@my_val inside DataToKeep:
module DataToKeep
  @@my_val = 0

  def my_val
    @@my_val
  end

  def my_val= val
    @@my_val = val
  end
end

module Jekyll
  class TagWithKeptData < Liquid::Tag
    include DataToKeep

    def render(context)
      self.my_val = self.my_val + 1
      return "<p>Times called: #{self.my_val}</p>"
    end
  end
end

Liquid::Template.register_tag('counter', Jekyll::TagWithKeptData)
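Since the question asks for key-value pairs, the same pattern extends naturally to a hash. A minimal sketch along the same lines (the remember tag, its key: value markup, and all names are illustrative assumptions, not part of the Jekyll API):
module KeptHash
  @@store = {}

  # Shared hash that survives across tag invocations
  # within a single `jekyll build` run.
  def kept_hash
    @@store
  end
end

module Jekyll
  class RememberTag < Liquid::Tag
    include KeptHash

    def initialize(tag_name, markup, tokens)
      super
      @key, @value = markup.split(':', 2).map(&:strip)
    end

    def render(context)
      kept_hash[@key] = @value if @value
      kept_hash[@key].to_s
    end
  end
end

Liquid::Template.register_tag('remember', Jekyll::RememberTag)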

Rails & MySQL: Array winds up with ---[] or --- stored in the DB

I am extracting a series of strings from an XML stream and storing them in a MySQL database (tried with VARCHAR and TEXT). At the start of each array in the DB, I get --- and then either a [] if it is a blank array, or the values.
Rake task code:
@issue = Array.new
items.each do |item| # items is the parsed elements from XML
  link_key = item.xpath('key').inner_text
  @issue << link_key
  Rails.logger.debug("Issue: #{@issue.inspect}")
end
Database value example:
"--- []"
-or-
"---
- CR-3528"
Not sure what else would be useful.
That's because you're serializing an array: what you see in the column is the array dumped as YAML (--- is the YAML document header, and --- [] is an empty array).
One way to deal with this is to mark a field as serialized with serialize (docs):
serialize :issue
See this for additional details.
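A minimal sketch of what that looks like in the model, assuming the column is named issue and the model is called Ticket (both names are assumptions):
class Ticket < ActiveRecord::Base
  # Dumps the Ruby Array to YAML on save and parses it back on load,
  # so the "---" header never leaks into your application code.
  serialize :issue, Array
end

t = Ticket.create!(issue: ["CR-3528"])
t.reload.issue # => ["CR-3528"]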
If you had been storing the value as plain text, you would not have seen this; it would have been just the text.