How to load a Jinja2 template from a database into an Environment's loader - jinja2

FileSystemLoader loads templates from a directory. Is there any way I could pull the template from a database as a string into the loader?
env = Environment(
    # loader=FileSystemLoader(templates),
    loader=Filedb('template.j2'),  # fetch from db?
    undefined=StrictUndefined,  # force variables to be defined
)
env.filters['custom_filter'] = func
t = env.get_template("template.j2")

From the Jinja docs:
If you want to create your own loader, subclass BaseLoader and override get_source.
For example:
class DatabaseLoader(BaseLoader):
    def __init__(self, database_credentials):
        self.database_credentials = database_credentials

    def get_source(self, environment, template):
        # Load from database... an exercise for the reader,
        # taken up in the sketch below.
        ...
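Filling in the lookup, here is a minimal sketch of a working loader, assuming a hypothetical fetch_template_source helper that queries your database and returns the template source string (or None if the name is unknown). get_source must return a (source, filename, uptodate) tuple and raise TemplateNotFound when the lookup fails:
from jinja2 import BaseLoader, Environment, StrictUndefined, TemplateNotFound

class DatabaseLoader(BaseLoader):
    def __init__(self, database_credentials):
        self.database_credentials = database_credentials

    def get_source(self, environment, template):
        # fetch_template_source is a hypothetical helper that queries
        # the database for the named template and returns its source.
        source = fetch_template_source(self.database_credentials, template)
        if source is None:
            raise TemplateNotFound(template)
        # There is no file on disk, so filename is None; returning False
        # from the uptodate callable makes auto_reload re-fetch the
        # template on each check.
        return source, None, lambda: False

env = Environment(
    loader=DatabaseLoader(database_credentials),
    undefined=StrictUndefined,
)
t = env.get_template("template.j2")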
Because templates can depend on other templates, loading one template could require multiple database lookups. Database lookups could be minimized using bytecode caching to cache compiled templates.
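For instance, Jinja's built-in FileSystemBytecodeCache stores compiled templates on disk; a sketch of wiring it in (the cache directory here is an arbitrary choice):
from jinja2 import Environment, FileSystemBytecodeCache

env = Environment(
    loader=DatabaseLoader(database_credentials),
    # Compiled bytecode is cached on disk, so repeated loads skip
    # recompilation, though get_source may still be called to check
    # whether a template changed.
    bytecode_cache=FileSystemBytecodeCache('/tmp/jinja_cache'),
)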
It is also possible to load all of the templates from the database into a dictionary, and then load the dictionary using Jinja's DictLoader.
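A sketch of that approach, assuming a hypothetical load_all_templates helper that returns a {name: source} mapping for every template row in the database:
from jinja2 import DictLoader, Environment

# load_all_templates is a hypothetical helper that reads every
# template from the database into a {name: source} dict up front.
templates = load_all_templates(database_credentials)
env = Environment(loader=DictLoader(templates))
t = env.get_template("template.j2")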

Related

How can I save my session or my GAN model into a js file

I want to deploy my GAN model on a web-based UI. For this I need to convert my model's checkpoints into js files to be called by web code. There are functions for saved_model and Keras to convert into pb files, but none for js.
My main concern is that I am confused about how to dump a session or variable weights into js files.
You can save a Keras model from Python. There is a full tutorial here, but basically it amounts to calling this after training:
import tensorflowjs as tfjs
tfjs.converters.save_keras_model(model, tfjs_target_dir)
Then host the result somewhere publicly accessible (or on the same server as your web UI), and you can load your model into TensorFlow.js as follows:
import * as tf from '@tensorflow/tfjs';
const model = await tf.loadLayersModel('https://foo.bar/tfjs_artifacts/model.json');

Creating a serving graph separately from training in tensorflow for Google CloudML deployment?

I am trying to deploy a tf.keras image classification model to Google CloudML Engine. Do I have to include code to create a serving graph separately from training to get it to serve my models in a web app? I already have my model in SavedModel format (saved_model.pb & variables files), so I'm not sure if I need this extra step to get it to work.
E.g., this code is directly from the GCP TensorFlow deploying models documentation:
def json_serving_input_fn():
    """Build the serving inputs."""
    inputs = {}
    for feat in INPUT_COLUMNS:
        inputs[feat.name] = tf.placeholder(shape=[None], dtype=feat.dtype)
    return tf.estimator.export.ServingInputReceiver(inputs, inputs)
You are probably training your model with actual image files, while it is best to send images as encoded byte-strings to a model hosted on CloudML. Therefore you'll need to specify a ServingInputReceiver function when exporting the model, as you mention. Some boilerplate code to do this for a Keras model:
# Convert the Keras model to a TF Estimator
tf_files_path = './tf'
estimator = tf.keras.estimator.model_to_estimator(
    keras_model=model, model_dir=tf_files_path)

# Your serving input function will accept a string
# and decode it into an image
def serving_input_receiver_fn():
    def prepare_image(image_str_tensor):
        image = tf.image.decode_png(image_str_tensor, channels=3)
        return image  # apply additional processing if necessary

    # Ensure the model is batchable
    # https://stackoverflow.com/questions/52303403/
    input_ph = tf.placeholder(tf.string, shape=[None])
    images_tensor = tf.map_fn(
        prepare_image, input_ph, back_prop=False, dtype=tf.float32)
    return tf.estimator.export.ServingInputReceiver(
        {model.input_names[0]: images_tensor},
        {'image_bytes': input_ph})

# Export the estimator - deploy it to CloudML afterwards
export_path = './export'
estimator.export_savedmodel(
    export_path,
    serving_input_receiver_fn=serving_input_receiver_fn)
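Once exported this way, a prediction request carries each image as a base64 string; CloudML unwraps the {"b64": ...} marker back into the raw byte-string fed to the serving input. A minimal sketch of building the JSON request body (the file name is arbitrary):
import base64
import json

# Base64-encode the raw image bytes; the {'b64': ...} wrapper tells
# CloudML to decode them before they reach the 'image_bytes' input.
with open('example.png', 'rb') as f:
    image_b64 = base64.b64encode(f.read()).decode('utf-8')

request_body = json.dumps(
    {'instances': [{'image_bytes': {'b64': image_b64}}]})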
You can refer to this very helpful answer for a more complete reference and other options for exporting your model.
Edit: If this approach throws a ValueError: Couldn't find trained model at ./tf. error, you can try the workaround solution that I documented in this answer.

Load jinja2 templates dynamically on a Pyramid view

I'm developing a Pyramid project with the Jinja2 templating engine. Following the Jinja2 documentation I've found a way to load different templates from a single view. But given that the pyramid_jinja2 module is already configured in my app with a default path for templates, I was wondering if there is a more elegant way to get this done. This is my approach:
from jinja2 import Environment, PackageLoader

@view_config(context=Test)
def test_view(request):
    env = Environment(loader=PackageLoader('project_name', 'templates'))
    template = env.get_template('section1/example1.jinja2')
    return Response(template.render(data={'a': 1, 'b': 2}))
Can I get an instance of the pyramid_jinja2 environment from somewhere, so I don't have to set the default path for templates again in the view?
The following is enough:
from pyramid.renderers import render
template = "section/example1.jinja2"
context = dict(a=1, b=2)
body = render(template, context, request=request)
And to configure template loading, do this in your __init__.py:
config.add_jinja2_search_path('project_name:templates', name='.jinja2', prepend=True)

Accessing data in local drives from controller of grails application

I have my data in the form of a CSV file at the following location C:\xyz\data.csv in my system. How can I access this data from the controller of my Grails application? Is it possible to do so? If yes, how? Any help would be much appreciated.
For reading a file (CSV, XML, image, ...) from a controller:
def csv = grailsAttributes.getApplicationContext().getResource("/data/data.csv").getFile()
Try this for your case.
If your Grails application runs on the same system you can access the file like this.
class MyController {
    def myAction() {
        def myFile = new File('your path')
        def content = myFile.text
    }
}
If your webapp runs somewhere else (which is most likely your case), you can use a file upload instead.

How to use CoffeeScript instead of JSON? For configuration files etc

JSON really is a pain to use for local configuration files, as it does not support comments or functions and requires incredibly verbose syntax (commas, always using double quotes for keys). This makes it very error prone, or, in cases where functions are required, impossible to use.
Now I know that I could just do:
require('coffee-script')
config = require('./config.coffee')
However, that requires me to put module.exports = {the data} inside config.coffee, which is less than ideal. It also exposes things such as require, which can make the configuration files insecure if we do not trust them.
Has anyone found a way to read coffeescript configuration files, but keep them secure?
It turns out CoffeeScript has support for the security part built in: set the sandbox argument to true in the eval call. E.g.
# Prepare
fsUtil = require('fs')
coffee = require('coffee-script')
# Read
dataStr = fsUtil.readFileSync('path').toString()
data = coffee.eval(dataStr, {sandbox:true})
The above code will read in the file data, then eval it with CoffeeScript in sandbox mode.
I've created a nice wrapper for this called CSON, which supports coffee and js files via require, cson files via the above mechanism, and json files via the typical JSON.parse, as well as stringifying the values back to CoffeeScript notation. Using this, the following API is exposed:
# Include CSON
CSON = require('cson')
# Parse a file path
CSON.parseFile 'data.cson', (err,obj) -> # async
result = CSON.parseFileSync('data.cson') # sync
# Parse a string
CSON.parse src, (err,obj) -> # async
result = CSON.parseSync(src) # sync
# Stringify an object to CSON
CSON.stringify data, (err,str) -> # async
result = CSON.stringifySync(data) # sync