`uninitialized constant` error in JRuby

I have the following JRuby code that uses the Java class javax.naming.InitialContext:

if RUBY_PLATFORM == "java"
  require 'java'
  import javax.naming.InitialContext

  module JndiProperties
    def self.getProperty(name)
      begin
        env.lookup(name).to_s
      rescue
        nil
      end
    end

    def self.[](name)
      getProperty(name)
    end

    private

    def self.env
      context = InitialContext.new
      environment = context.lookup 'java:comp/env'
      environment
    end
  end
else
  module JndiProperties
    def self.getProperty(name)
      nil
    end

    def self.[](name)
      getProperty(name)
    end
  end
end

I use this module in database.yml to configure the database connection. For example:

username: <%= JndiProperties['ANTARCTICLE_DB_USER'] || 'root' %>

When I run the Rails application, I get uninitialized constant JndiProperties::InitialContext. If I use the module from irb, it works as expected.

Just put the import line inside the module:

module JndiProperties
  java_import 'javax.naming.InitialContext'
  # ...
end

as the import machinery uses const_missing to resolve the constant. Or assign the constant manually:

InitialContext = Java::JavaxNaming::InitialContext

Then it should work even outside the module.
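
For reference, a minimal sketch of the whole module with the import moved inside (same behavior as the original; note that private_class_method, rather than a bare private, is what actually hides a def self. method):

if RUBY_PLATFORM == "java"
  require 'java'

  module JndiProperties
    # Importing here resolves the constant inside this namespace.
    java_import 'javax.naming.InitialContext'

    def self.getProperty(name)
      env.lookup(name).to_s
    rescue
      nil
    end

    def self.[](name)
      getProperty(name)
    end

    def self.env
      InitialContext.new.lookup 'java:comp/env'
    end
    private_class_method :env
  end
end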

Why can't my JSON file be found using require_relative from another file?

I am new to Ruby and programming and am building my first terminal application. Any help is greatly appreciated.
Why does the app crash at rescue when I call it from main.rb, but locate and write to log.json just fine when I run the app from json.rb?
Specifically, I am having trouble locating my log.json file when running the app.
The app runs by calling a method from my main.rb file, located in the root directory.
I am trying to access my JSON file named log.json, located in the methods directory.
When I include require_relative 'methods/json.rb' in my main.rb file, the app crashes with:

/src/methods/log.json (LoadError)
from main.rb:11:in `<main>'

Alternatively, if I run the app from my json.rb file, which writes to log.json, the app does not crash and the input is successfully added to log.json:
require 'json'
require 'colorize'

class NotValidJsonError < StandardError
end

class FileNotFoundError < StandardError
end

# WRITE TO JSON FILE
def write_json_file(username)
  # ERROR HANDLING
  # file_path = './log.json'
  begin
    # file = File.read(file_path)
    file = File.read(File.expand_path("log.json", __dir__))
  rescue => e
    raise FileNotFoundError, "Could not find file"
    # NB: these lines are unreachable; the raise above aborts the method first
    puts e.message
    puts e.backtrace.inspect
  end
  begin
    json = JSON.parse(file)
  rescue => e
    raise NotValidJsonError, "Input is not valid JSON"
    # NB: unreachable, as above
    puts e.message
    puts e.backtrace.inspect
  end
  log_hash = Hash.new
  log_hash["name"] = username
  puts "Which Level did you practice today?".colorize(:blue)
  log_hash["Level"] = gets.chomp
  puts "What Key Signature did you practice?".colorize(:cyan)
  log_hash["Key"] = gets.chomp
  json.push(log_hash)
  File.open('./log.json', 'w') do |f|
    f.puts JSON.pretty_generate(json)
  end
end

# READ FROM JSON FILE
def read_json_file(username)
  # ERROR HANDLING
  file = File.read('./log.json')
  json = JSON.parse(file)
  json.each do |hash|
    # NB: entries are written with a lowercase "name" key above
    if hash["Name"] == username
      puts "Your Level is #{hash["Level"]} and your Key is #{hash["Key"]}"
    end
  end
end
Here is my main.rb code:

# Relative methods
require_relative 'methods/welcome_page.rb'
require_relative 'methods/username_prompt'
require_relative 'methods/prompt_one.rb'
require_relative 'methods/challenge_selection.rb'
require_relative 'methods/key_signature_selection.rb'
require_relative 'methods/level_plus_key_calculator.rb'
require_relative 'methods/displayed_progression.rb'
require_relative 'methods/json.rb'

# Relative classes
require_relative 'classes/chord_progression'

# run program
username_prompt
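
The underlying issue is that a bare relative path such as './log.json' is resolved against the process's current working directory (wherever ruby was launched), not against the directory of the source file that uses it. A minimal sketch pinning every read and write to the script's own directory, assuming log.json lives next to json.rb:

require 'json'

# __dir__ is the directory of the current source file, so this path stays
# stable no matter which directory the program is launched from.
LOG_PATH = File.expand_path('log.json', __dir__)

def load_log
  JSON.parse(File.read(LOG_PATH))
end

def save_log(entries)
  File.write(LOG_PATH, JSON.pretty_generate(entries))
end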

Is there a way to ensure that all my ctypes have argtypes?

I know I should specify argtypes for my C/C++ functions, since some of my calls would otherwise result in stack corruption:

myCfunc.argtypes = [ct.c_void_p, ct.POINTER(ct.c_void_p)]
myCfunc.errcheck = my_error_check

In fact, I would like to verify that I did not forget to specify function prototypes (argtypes/errcheck) for any of the roughly 100 functions I call...
Right now I just grep through my Python files and visually compare them against the file containing the prototype definitions.
Is there a better way to verify that I have defined argtypes/errcheck for all my calls?
The mention of namespaces by @eryksun made me wrap the DLL in a class that only exposes the explicitly annotated functions. As long as the DLL doesn't export functions named "annotate" or "_error_check" (mine didn't), the following approach seems to work for me:
import ctypes as ct

class MyWinDll:
    def __init__(self, dll_filename):
        self._dll = ct.WinDLL(dll_filename)
        # Specify function prototypes using the annotate function
        self.annotate(self._dll.myCfunc, [ct.POINTER(ct.c_void_p)], self._error_check)
        self.annotate(self._dll.myCfunc2, [ct.c_void_p], self._error_check)
        ...

    def annotate(self, function, argtypes, errcheck):
        # note that "annotate" may not be used as a function name in the dll...
        function.argtypes = argtypes
        function.errcheck = errcheck
        setattr(self, function.__name__, function)

    def _error_check(self, result, func, arguments):
        if result != 0:
            raise Exception

if __name__ == '__main__':
    dll = MyWinDll('myWinDll.dll')
    handle = ct.c_void_p(None)
    # Now call the dll functions using the wrapper object
    dll.myCfunc(ct.byref(handle))
    dll.myCfunc2(handle)
Update: comments by @eryksun prompted me to improve the code by giving the user control of the WinDLL constructor and reducing repeated code:
import ctypes as ct

DEFAULT = object()

def annotate(dll_object, function_name, argtypes, restype=DEFAULT, errcheck=DEFAULT):
    function = getattr(dll_object._dll, function_name)
    function.argtypes = argtypes
    # restype and errcheck are optional in the function_prototypes list
    if restype is DEFAULT:
        restype = dll_object.default_restype
    function.restype = restype
    if errcheck is DEFAULT:
        errcheck = dll_object.default_errcheck
    function.errcheck = errcheck
    setattr(dll_object, function_name, function)

class MyDll:
    def __init__(self, ct_dll, **function_prototypes):
        self._dll = ct_dll
        for name, prototype in function_prototypes.items():
            annotate(self, name, *prototype)

class OneDll(MyDll):
    def __init__(self, ct_dll):
        # set default values for function_prototypes
        self.default_restype = ct.c_int
        self.default_errcheck = self._error_check
        function_prototypes = {
            'myCfunc': [[ct.POINTER(ct.c_void_p)]],
            'myCfunc2': [[ct.c_void_p]],
            # ...
            'myCgetErrTxt': [[ct.c_int, ct.c_char_p, ct.c_size_t], DEFAULT, None],
        }
        super().__init__(ct_dll, **function_prototypes)

    # My error check function actually calls the dll, so I keep it here...
    def _error_check(self, result, func, arguments):
        msg = ct.create_string_buffer(255)
        if result != 0:
            raise Exception(self.myCgetErrTxt(result, msg, ct.sizeof(msg)))

if __name__ == '__main__':
    ct_dll = ct.WinDLL('myWinDll.dll')
    dll = OneDll(ct_dll)
    handle = ct.c_void_p(None)
    dll.myCfunc(ct.byref(handle))
    dll.myCfunc2(handle)
(I don't know if the original code should be deleted; I've kept it for reference.)
Here's a dummy class that can replace the DLL object's function-pointer class with a simple check that the attributes have been defined:
class DummyFuncPtr(object):
    # Class-level defaults: False means "never assigned"
    restype = False
    argtypes = False
    errcheck = False

    def __call__(self, *args, **kwargs):
        # Calling the "function" just verifies every attribute was set
        assert self.restype
        assert self.argtypes
        assert self.errcheck

    def __init__(self, *args):
        pass

    def __setattr__(self, key, value):
        # Record only the fact that the attribute was assigned, shadowing
        # the class-level False with an instance-level True
        super(DummyFuncPtr, self).__setattr__(key, True)
To use it, replace your DLL object's _FuncPtr class and then call each function to run the check, e.g.:

dll = ctypes.cdll.LoadLibrary(r'path/to/dll')

# replace the DLL's function pointer
# comment out this line to disable the dummy class
dll._FuncPtr = DummyFuncPtr

some_func = dll.someFunc
some_func.restype = None
some_func.argtypes = None
some_func.errcheck = None

another_func = dll.anotherFunc
another_func.restype = None
another_func.argtypes = None

some_func()     # no error
another_func()  # AssertionError because errcheck was not defined
The dummy class completely prevents the functions from ever being called, of course, so just comment out the replacement line to switch back to normal operation.
Note that each function is only checked when it is called, so this belongs in a unit test file where every function is guaranteed to be called.

Defining module variables from functions

I've finally been getting into Python, and have noticed something strange that works in Java but not in Python.
When I type the following:

fn = ""  # Local filename storage.

def read(filename):
    fn = filename
    return open(filename, 'r').read()

my flake8 linter for Atom gives me the following error:

F841 - local variable 'fn' is assigned to but never used.

I'm assuming this means that the variable is being defined at the def level, not the module level, which is what I intend. Please correct me if I'm wrong.
I've searched Google with multiple wordings, but can't seem to phrase it in a way that brings up the right results...
Any ideas on how I can achieve module-level variable definitions from the function level?
If you want fn inside the function to refer to the global (module-level) variable, use the global statement:

def read(filename):
    global fn  # <-----
    fn = filename
    return open(filename, 'r').read()

BTW, the ; is optional. Don't use it.
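
A quick usage sketch (the filename is hypothetical): after read() runs, the module-level fn holds the last filename passed in:

content = read("example.txt")  # hypothetical file in the working directory
print(fn)                      # -> example.txt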
You can also set a module-level variable from the function by doing:

import sys

def read(filename):
    # Look up this module's object and set the attribute on it directly
    module = sys.modules[__name__]
    setattr(module, 'fn', filename)
    return open(filename, 'r').read()

However, needing this is very unusual. Consider changing your architecture.
UPD: Let's consider an example:

# module1
# uncomment this to fix the NameError and AttributeError below
# some_var = ''

def foo(val):
    global some_var
    some_var = val

# module2
from module1 import *

print(some_var)  # raises NameError: name 'some_var' is not defined
foo('bar')
print(some_var)  # still raises NameError: name 'some_var' is not defined

# module3
import module1

print(module1.some_var)  # raises AttributeError: 'module' object has no attribute 'some_var'
module1.foo('bar')
print(module1.some_var)  # prints 'bar' even without the some_var = '' definition in module1

So it's not obvious how global behaves across imports: from module1 import * copies the names that exist in module1 at import time, so a name that foo() creates later never shows up in module2, while module3 sees it because module1.some_var is looked up on the module object at access time. I think manually doing setattr(module, 'attr_name', value) during the read() call states the intent more clearly.

Rails 3 CSV imports to raise errors in resulting view

A CSV import process is meant to import valid records and report errors for any bad records in a resulting action.
rescue output can go to the console, and in development mode it can be rendered to the user, but in production mode the errors need to be captured and rendered explicitly.
The model defines the import:

def self.importtest(file, analisi_id)
  n, errs = 0, []
  CSV.foreach(file.path, :col_sep => "\t", headers: true, skip_blanks: true) do |row|
    n += 1
    # skip blank rows
    next if row.join.blank?
    operativ = Operativ.find_by_account(row[3])
    errs << row if operativ.nil?
  end
  if errs != []
    # send errors to form and render to user
  else
    # run full import routine
  end
end
and the controller:

def import
  params[:analysis_id] = session[:analysis_id]
  Bilancino.import(params[:file], params[:analysis_id])
  redirect_to loaded_registrations_path, notice: "data imported"
end
There are two problems here: first, how to populate a form with the error data; second, how to redirect differently for the errs != [] and errs == [] cases.
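
One possible shape, sketched under assumptions not in the question (the model returns the errs array, errors are surfaced via flash rather than a dedicated form, and run_full_import is a hypothetical helper):

# Model: return the collected errors so the controller can branch on them.
def self.importtest(file, analisi_id)
  errs = []
  # ... CSV loop as above, pushing unmatched rows onto errs ...
  run_full_import(file, analisi_id) if errs.empty?  # hypothetical helper
  errs
end

# Controller
def import
  errs = Bilancino.importtest(params[:file], session[:analysis_id])
  if errs.empty?
    redirect_to loaded_registrations_path, notice: "data imported"
  else
    redirect_to loaded_registrations_path,
                alert: "#{errs.size} rows could not be matched"
  end
end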

Jekyll collections: process markdown within a plugin

I'm trying to adapt an existing Jekyll plugin (taken from here) in order to generate a .json version of every document in a collection.
However, I'm having trouble getting my content converted from markdown to HTML (which I would then like to encode into JSON). In Jekyll, a collection "document" is different from a "post", and while posts have access to a transform method that does what I need, it looks like documents do not.
Is there some other straightforward way to feed content to a markdown parser in the context of a Jekyll plugin?
Here's the plugin code I've been working with so far. It generates JSON, but the markdown is not converted to HTML (markdown syntax like ** remains in the file):
module Jekyll
  class JSONPage < Page
    def initialize(site, base, dir, name, content)
      @site = site
      @base = base
      @dir = dir
      @name = name
      self.data = {}
      self.content = content
      process(@name)
    end

    def read_yaml(*)
      # Do nothing
    end

    def render_with_liquid?
      false
    end
  end

  class JSONPageGenerator < Generator
    safe true

    def generate(site)
      site.documents.each do |document|
        # Set the path of the JSON version
        path = "#{document.collection.label}" + document.cleaned_relative_path + ".json"
        output = document.to_liquid
        # Delete unnecessary metadata
        ['layout', 'output'].each { |key| output.delete(key) }
        site.pages << JSONPage.new(site, site.source, File.dirname(path), File.basename(path), output)
      end
    end
  end
end
Okay, turns out the answer is simple: you can just require 'kramdown' (or any other markdown converter) directly in the plugin.
module Jekyll
  class JSONPage < Page
    def initialize(site, base, dir, name, content)
      @site = site
      @base = base
      @dir = dir
      @name = name
      self.data = {}
      self.content = content
      process(@name)
    end

    def read_yaml(*)
      # Do nothing
    end

    def render_with_liquid?
      false
    end
  end

  class JSONPostGenerator < Generator
    safe true

    def generate(site)
      require 'kramdown'
      site.documents.each do |document|
        # Set the path of the JSON version
        path = "#{document.collection.label}" + document.cleaned_relative_path + ".json"
        output = document.to_liquid
        # Convert the raw markdown body to HTML before serializing
        output['content'] = Kramdown::Document.new(document.content).to_html.gsub(/\n/, "")
        # Delete unnecessary metadata
        ['layout', 'output'].each { |key| output.delete(key) }
        site.pages << JSONPage.new(site, site.source, File.dirname(path), File.basename(path), output.to_json)
      end
    end
  end
end
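
A possible refinement, not from the original answer and assuming Jekyll 3+: instead of hard-coding Kramdown, ask the site for its configured markdown converter, so the plugin honors whatever is set in _config.yml:

# Inside generate(site), replacing the Kramdown::Document call:
converter = site.find_converter_instance(Jekyll::Converters::Markdown)
output['content'] = converter.convert(document.content).gsub(/\n/, "")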