Store json-like hierarchical data as nested directory tree? - json

TLDR
I am looking for an existing convention to encode / serialize tree-like data in a directory structure, split into small files instead of one big file.
Background
There are different scenarios where we want to store tree-like data in a file, which can then be tracked in git. Json files can express dependencies for a package manager (e.g. composer for php, npm for node.js). Yml files can define routes, test cases, etc.
Typically a "tree structure" is a combination of key-value lists and "serial" lists, where each value can again be a tree structure.
Very often the order of associative keys is irrelevant, and should ideally be normalized to alphabetic order.
One problem when storing a big tree structure in a single file, be it json or yml, which is then tracked with git, is that you get plenty of merge conflicts if different branches add and remove entries in the same key-value list.
Especially for key-value lists where the order is irrelevant, it would be more git-friendly to store each sub-tree in a separate file or directory, instead of storing them all in one big file.
Technically it should be possible to create a directory structure that is as expressive as json or yml.
Performance concerns can be overcome with caching. If the files are going to be tracked in git, we can assume they are going to be unchanged most of the time.
The main challenges:
- How to deal with "special characters" that cause problems in some or most file systems, if used in a file or directory name?
- If I need to encode or disambiguate special characters, how can I still keep it pleasant to the eye?
- How to deal with limitations to file name length in some file systems?
- How to deal with other file system quirks, e.g. case insensitivity? Is this even still a thing?
- How to express serial lists, which might contain key-value lists as children? Serial lists cannot be expressed as directories, so its children have to live within the same file.
- How can I avoid reinventing the wheel, creating my own made-up "convention" that nobody else uses?
Desired features:
- As expressive as json or yml.
- git-friendly.
- Machine-readable and -writable.
- Human-readable and -editable, perhaps with limitations.
- Ideally it should use known formats (json, yml) for structures and values that are expressed within a single file.
Naive approach
Of course the first idea would be to use yml files for literal values and serial lists, and directories for key-value lists (in cases where the order does not matter). In a key-value list, the file or directory names are interpreted as keys, the files and subdirectories as values.
This has some limitations, because not every possible key that would be valid in json or yml is also a valid file name in every file system. The most obvious example would be a slash.
Question
I have different ideas how I would do this myself.
But I am really looking for some kind of convention for this that already exists.
Related questions
Persistence: Data Trees stored as Directory Trees
This is asking about performance, and about using the filesystem like a database - I think.
I am less interested in performance (caching makes it irrelevant), and more about the actual storage format / convention.

The closest thing I can think of that could be seen as some kind of convention for doing this are Linux configuration files. In modern Linux, you often split the configuration of a service into multiple files residing in a certain directory, e.g. /etc/exim4/conf.d/ instead of having a single file /etc/exim4/exim4.conf. There are multiple reasons for doing this:
Some configuration may be provided by the package manager (e.g. linking to other services that are installed via package manager), while other parts are user-defined. Since there would be a conflict if the user edits a file provided by the package manager, they can instead create a new file and enter additional configuration there.
For large configuration files (like for exim4), it's easier to navigate the configuration if you have multiple files for different concerns (hardcore vim users might disagree).
You can enable / disable parts of the configuration more easily by renaming / moving the file that contains a certain part.
We can learn a bit from this: separation into distinct files should happen if the semantics of the content are orthogonal, i.e. the semantics of one file do not depend on the semantics of another file. This is of course a rule for sibling files; we cannot really deduce rules for serializing a tree structure as a directory tree from it. However, we can definitely see reasons for not splitting every value into its own file.
You mention problems of encoding special characters into a file name. You will only have this problem if you go against conventions! The implicit convention on file and directory names is that they act as locator / ID for files, never as content. Again, we can learn a bit from Linux config files: Usually, there is a master file that contains an include statement which loads all the split files. The include statement gives a path glob expression which locates the other files. The path to those files is irrelevant for the semantics of their content. Technically, we can do something similar with YAML.
Assume we want to split this single YAML file into multiple files (pardon my lack of creativity):
spam:
  spam: spam
  egg: sausage
  baked beans:
    - spam
    - spam
    - bacon
A possible transformation would be this (read stuff ending with / as directory, : starts file content):
confdir/
  main.yaml:
    spam: !include spammap/main.yaml
    baked beans: !include beans/
  spammap/
    main.yaml:
      spam: !include spam.yaml
      egg: !include egg.yaml
    spam.yaml:
      spam
    egg.yaml:
      sausage
  beans/
    1.yaml:
      spam
    2.yaml:
      spam
    3.yaml:
      bacon
(In YAML, !include is a local tag. With most implementations, you can register a custom constructor for it, thus loading the whole hierarchy as a single document.)
As you can see, I put every hierarchy level and every value into a separate file. I use two kinds of includes: a reference to a file will load the content of that file; a reference to a directory will generate a sequence where each item's value is the content of one file in that directory, sorted by file name. The file and directory names are never part of the content; sometimes I opted to name them differently (e.g. baked beans -> beans/) to avoid possible file system problems (spaces in file names in this case – usually not a serious problem nowadays). Also, I adhere to the file name extension convention (having the files carry .yaml). This would be quirkier if you put content into the file names.
I named the starting file on each level main.yaml (not needed in beans/, since it's a sequence). While the exact name is arbitrary, this is a convention used in several other tools, e.g. Python with __init__.py or the Nix package manager with default.nix. Then I placed additional files or directories beside this main file.
Since including other files is explicit, it is not a problem with this approach to put a larger part of the content into a single file. Note that JSON lacks YAML's tags functionality, but you can still walk through a loaded JSON file and preprocess values like {"!include": "path"}.
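For illustration, here is a minimal sketch of that JSON preprocessing (my own, with arbitrary function names): it walks the loaded document and replaces any {"!include": "path"} object with the parsed content of the referenced file, resolved relative to the including file.
import json
import os

def load_json_tree(path):
    # load a JSON file and resolve {"!include": "..."} markers recursively
    basedir = os.path.dirname(path)
    with open(path) as f:
        return _resolve(json.load(f), basedir)

def _resolve(node, basedir):
    if isinstance(node, dict):
        if set(node.keys()) == {"!include"}:
            # the whole object is an include marker - load the referenced file
            return load_json_tree(os.path.join(basedir, node["!include"]))
        return {key: _resolve(value, basedir) for key, value in node.items()}
    if isinstance(node, list):
        return [_resolve(item, basedir) for item in node]
    return node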
To sum up: while there is no ready-made convention for exactly what you want, parts of the problem have been solved in different places, and you can inherit wisdom from that.
Here's a minimal working example of how to do it with PyYAML. This is just a proof of concept; several features are missing (e.g. autogenerated file names will be ascending numbers, and there is no support for serializing lists into directories). It shows what needs to be done to store information about the data layout while being transparent to the user (data can be accessed like a normal dict structure). It remembers the file names that data has been loaded from and stores to those files again.
import os.path
from pathlib import Path

import yaml
from yaml.reader import Reader
from yaml.scanner import Scanner
from yaml.parser import Parser
from yaml.composer import Composer
from yaml.constructor import SafeConstructor
from yaml.resolver import Resolver
from yaml.emitter import Emitter
from yaml.serializer import Serializer
from yaml.representer import SafeRepresenter


class SplitValue(object):
    """This is a value that should be written into its own YAML file."""

    def __init__(self, content, path=None):
        self._content = content
        self._path = path

    def getval(self):
        return self._content

    def setval(self, value):
        self._content = value

    def __repr__(self):
        return self._content.__repr__()


class TransparentContainer(object):
    """Makes SplitValues transparent to the user."""

    def __getitem__(self, key):
        val = super(TransparentContainer, self).__getitem__(key)
        return val.getval() if isinstance(val, SplitValue) else val

    def __setitem__(self, key, value):
        val = super(TransparentContainer, self).__getitem__(key)
        if isinstance(val, SplitValue) and not isinstance(value, SplitValue):
            val.setval(value)
        else:
            super(TransparentContainer, self).__setitem__(key, value)


class TransparentList(TransparentContainer, list):
    pass


class TransparentDict(TransparentContainer, dict):
    pass


class DirectoryAwareFileProcessor(object):
    def __init__(self, path, mode):
        self._basedir = os.path.dirname(path)
        self._file = open(path, mode)

    def close(self):
        try:
            self._file.close()
        finally:
            self.dispose()  # implemented by PyYAML

    # __enter__ / __exit__ to use this in a `with` construct
    def __enter__(self):
        return self

    def __exit__(self, type, value, traceback):
        self.close()


class FilesystemLoader(DirectoryAwareFileProcessor, Reader, Scanner,
                       Parser, Composer, SafeConstructor, Resolver):
    """Loads YAML file from a directory structure."""

    def __init__(self, path):
        DirectoryAwareFileProcessor.__init__(self, path, 'r')
        Reader.__init__(self, self._file)
        Scanner.__init__(self)
        Parser.__init__(self)
        Composer.__init__(self)
        SafeConstructor.__init__(self)
        Resolver.__init__(self)


def split_value_constructor(loader, node):
    path = loader.construct_scalar(node)
    with FilesystemLoader(os.path.join(loader._basedir, path)) as childLoader:
        return SplitValue(childLoader.get_single_data(), path)


FilesystemLoader.add_constructor(u'!include', split_value_constructor)


def transp_dict_constructor(loader, node):
    ret = TransparentDict()
    ret.update(loader.construct_mapping(node, deep=True))
    return ret


# override constructor for !!map, the default resolved tag for mappings
FilesystemLoader.add_constructor(yaml.resolver.BaseResolver.DEFAULT_MAPPING_TAG,
                                 transp_dict_constructor)


def transp_list_constructor(loader, node):
    ret = TransparentList()
    ret.extend(loader.construct_sequence(node, deep=True))
    return ret


# like above, for !!seq
FilesystemLoader.add_constructor(yaml.resolver.BaseResolver.DEFAULT_SEQUENCE_TAG,
                                 transp_list_constructor)


class FilesystemDumper(DirectoryAwareFileProcessor, Emitter,
                       Serializer, SafeRepresenter, Resolver):
    def __init__(self, path):
        DirectoryAwareFileProcessor.__init__(self, path, 'w')
        Emitter.__init__(self, self._file)
        Serializer.__init__(self)
        SafeRepresenter.__init__(self)
        Resolver.__init__(self)
        self.__next_unique_name = 1
        Serializer.open(self)

    def gen_unique_name(self):
        val = self.__next_unique_name
        self.__next_unique_name = self.__next_unique_name + 1
        return str(val)

    def close(self):
        try:
            Serializer.close(self)
        finally:
            DirectoryAwareFileProcessor.close(self)


def split_value_representer(dumper, data):
    if data._path is None:
        # autogenerate a file (or directory) name for values without one
        if isinstance(data._content, TransparentContainer):
            data._path = os.path.join(dumper.gen_unique_name(), "main.yaml")
        else:
            data._path = dumper.gen_unique_name() + ".yaml"
    Path(os.path.dirname(data._path)).mkdir(exist_ok=True)
    with FilesystemDumper(os.path.join(dumper._basedir, data._path)) as childDumper:
        childDumper.represent(data._content)
    return dumper.represent_scalar(u'!include', data._path)


yaml.add_representer(SplitValue, split_value_representer, FilesystemDumper)


def transp_dict_representer(dumper, data):
    return dumper.represent_dict(data)


yaml.add_representer(TransparentDict, transp_dict_representer, FilesystemDumper)


def transp_list_representer(dumper, data):
    return dumper.represent_list(data)


yaml.add_representer(TransparentList, transp_list_representer, FilesystemDumper)


# example usage:

# explicitly specify values that should be split.
myData = TransparentDict({
    "spam": SplitValue({
        "spam": SplitValue("spam", "spam.yaml"),
        "egg": SplitValue("sausage", "sausage.yaml")}, "spammap/main.yaml")})

with FilesystemDumper("root.yaml") as dumper:
    dumper.represent(myData)

# load values from stored files.
# The loaded data remembers which values have been in which files.
with FilesystemLoader("root.yaml") as loader:
    loaded = loader.get_single_data()

# modify a value as if it was a normal structure.
# actually updates a SplitValue
loaded["spam"]["spam"] = "baked beans"

# dumps the same structure as before, with the modified value.
with FilesystemDumper("root.yaml") as dumper:
    dumper.represent(loaded)

Related

clearing browser cache in django [duplicate]

I'm working on a universal solution for the problem of static files and updates to them.
Example: let's say there was a site with a /static/styles.css file, and the site was used for a long time, so a lot of visitors have this file cached in their browsers.
Now we make changes to this CSS file and update it on the server, but some users still have the old version (despite the modification date returned by the server).
The obvious solution is to add some version to the file, /static/styles.css?v=1.1, but in this case the developer must track changes to this file and manually increase the version.
A second solution is to compute the MD5 hash of the file and add it to the URL, /static/styles.css?v={md5hashvalue}, which looks much better, but the MD5 should be calculated automatically somehow.
The possible way I see it: create some template tag like this
{% static_file "style.css" %}
which will render
<link href="/static/style.css?v=md5hash">
BUT, I do not want this tag to calculate the MD5 on every page load, and I do not want to store the hash in the Django cache, because then we would have to clear it after updating the file...
Any thoughts?
Django 1.4 now includes CachedStaticFilesStorage which does exactly what you need (well... almost).
Since Django 2.2 ManifestStaticFilesStorage should be used instead of CachedStaticFilesStorage.
You use it with the manage.py collectstatic task. All static files are collected from your applications, as usual, but this storage manager also creates a copy of each file with the MD5 hash appended to the name. So, for example, if you have a css/styles.css file, it will also create something like css/styles.55e7cbb9ba48.css.
Of course, as you mentioned, the problem is that you don't want your views and templates calculating the MD5 hash all the time to find out the appropriate URLs to generate. The solution is caching. Ok, you asked for a solution without caching, I'm sorry, that's why I said almost. But there's no reason to reject caching, really. CachedStaticFilesStorage uses a specific cache named staticfiles. By default, it will use your existing cache system, and voilà! But if you don't want it to use your regular cache, perhaps because it's a distributed memcache and you want to avoid the overhead of network queries just to get static file names, then you can set up a specific RAM cache just for staticfiles. It's easier than it sounds: check out this excellent blog post. Here's what it would look like:
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.PyLibMCCache',
        'LOCATION': '127.0.0.1:11211',
    },
    'staticfiles': {
        'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
        'LOCATION': 'staticfiles-filehashes'
    }
}
I would suggest using something like django-compressor. In addition to automatically handling this type of stuff for you, it will also automatically combine and minify your files for fast page load.
Even if you don't end up using it in entirety, you can inspect their code for guidance in setting up something similar. It's been better vetted than anything you'll ever get from a simple StackOverflow answer.
I use my own templatetag which adds the file modification date to the URL: https://bitbucket.org/ad3w/django-sstatic
Is reinventing the wheel and creating your own implementation that bad? Furthermore, I would like low-level code (nginx, for example) to serve my static files in production instead of the Python application, even with a backend. And one more thing: I'd like links to stay the same after recalculation, so the browser fetches only new files. So here's my point of view:
template.html:
{% load md5url %}
<script src="{% md5url "example.js" %}"/>
out html:
static/example.js?v=5e52bfd3
settings.py:
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(PROJECT_DIR, 'static')
appname/templatetags/md5url.py:
import hashlib
import threading
from os import path

from django import template
from django.conf import settings

register = template.Library()


class UrlCache(object):
    _md5_sum = {}
    _lock = threading.Lock()

    @classmethod
    def get_md5(cls, file):
        try:
            return cls._md5_sum[file]
        except KeyError:
            with cls._lock:
                try:
                    md5 = cls.calc_md5(path.join(settings.STATIC_ROOT, file))[:8]
                    value = '%s%s?v=%s' % (settings.STATIC_URL, file, md5)
                except IsADirectoryError:
                    value = settings.STATIC_URL + file
                cls._md5_sum[file] = value
                return value

    @classmethod
    def calc_md5(cls, file_path):
        with open(file_path, 'rb') as fh:
            m = hashlib.md5()
            while True:
                data = fh.read(8192)
                if not data:
                    break
                m.update(data)
            return m.hexdigest()


@register.simple_tag
def md5url(model_object):
    return UrlCache.get_md5(model_object)
Note: to apply changes, the uWSGI application (to be specific, its processes) should be restarted.
Django 1.7 added ManifestStaticFilesStorage, a better alternative to CachedStaticFilesStorage that doesn't use the cache system and solves the problem of the hash being computed at runtime.
Here is an excerpt from the documentation:
CachedStaticFilesStorage isn’t recommended – in almost all cases ManifestStaticFilesStorage is a better choice. There are several performance penalties when using CachedStaticFilesStorage since a cache miss requires hashing files at runtime. Remote file storage require several round-trips to hash a file on a cache miss, as several file accesses are required to ensure that the file hash is correct in the case of nested file paths.
To use it, simply add the following line to settings.py:
STATICFILES_STORAGE = 'django.contrib.staticfiles.storage.ManifestStaticFilesStorage'
And then, run python manage.py collectstatic; it will append the MD5 to the name of each static file.
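Templates don't need any special treatment for this: the standard {% static %} tag resolves to the hashed file name through the generated manifest. A small illustrative example (the file and hash names are made up, following the styles.css example above):
{% load static %}
<link rel="stylesheet" href="{% static 'css/styles.css' %}">
{# rendered as something like: /static/css/styles.55e7cbb9ba48.css #}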
How about always having a version URL parameter in your URLs, and whenever you have a major release you change the version in that parameter. Even in the DNS. So if www.yourwebsite.com loads up www.yourwebsite.com/index.html?version=1.0, then after the major release the browser should load www.yourwebsite.com/index.html?version=2.0.
I guess this is similar to your solution 1. Instead of tracking files, can you track whole directories? For example, rather than /static/style.css?v=2.0, can you do /static-2/style.css, or to make it even more granular, /static/style/cssv2/.
There is an update for @deathangel908's code. Now it works well with S3 storage also (and with any other storage, I think). The difference is the use of the static files storage for getting the file content. The original doesn't work on S3.
appname/templatetags/md5url.py:
import hashlib
import threading

from django import template
from django.conf import settings
from django.contrib.staticfiles.storage import staticfiles_storage

register = template.Library()


class UrlCache(object):
    _md5_sum = {}
    _lock = threading.Lock()

    @classmethod
    def get_md5(cls, file):
        try:
            return cls._md5_sum[file]
        except KeyError:
            with cls._lock:
                try:
                    md5 = cls.calc_md5(file)[:8]
                    value = '%s%s?v=%s' % (settings.STATIC_URL, file, md5)
                except OSError:
                    value = settings.STATIC_URL + file
                cls._md5_sum[file] = value
                return value

    @classmethod
    def calc_md5(cls, file_path):
        with staticfiles_storage.open(file_path, 'rb') as fh:
            m = hashlib.md5()
            while True:
                data = fh.read(8192)
                if not data:
                    break
                m.update(data)
            return m.hexdigest()


@register.simple_tag
def md5url(model_object):
    return UrlCache.get_md5(model_object)
The major advantage of this solution: you don't have to modify anything in the templates.
This will add the build version into the STATIC_URL, and then the webserver will remove it with a Rewrite rule.
settings.py
# build version, it's increased with each build
VERSION_STAMP = __versionstr__.replace(".", "")
# rewrite static url to contain the number
STATIC_URL = '%sversion%s/' % (STATIC_URL, VERSION_STAMP)
So the final url would be for example this:
/static/version010/style.css
And then Nginx has a rule to rewrite it back to /static/style.css
location /static {
    alias /var/www/website/static/;
    rewrite ^(.*)/version([\.0-9]+)/(.*)$ $1/$3;
}
A simple template tag, vstatic, that creates versioned static file URLs, extending Django's behaviour:
from django import template
from django.conf import settings
from django.contrib.staticfiles.templatetags.staticfiles import static

register = template.Library()


@register.simple_tag
def vstatic(path):
    url = static(path)
    static_version = getattr(settings, 'STATIC_VERSION', '')
    if static_version:
        url += '?v=' + static_version
    return url
If you want to automatically set STATIC_VERSION to the current git commit hash, you can use the following snippet (Python 3 code, adjust if necessary):
import subprocess


def get_current_commit_hash():
    try:
        return subprocess.check_output(['git', 'rev-parse', '--short', 'HEAD']).strip().decode('utf-8')
    except:
        return ''
In settings.py, call get_current_commit_hash(), so this will be calculated only once:
STATIC_VERSION = get_current_commit_hash()
I use a global base context in all my views, where I set the static version to be the millisecond time (that way, it will be a new version every time I restart my application):
import time
from typing import Dict

from django.conf import settings
from django.shortcuts import render

# global base context
base_context = {
    "title": settings.SITE_TITLE,
    "static_version": int(round(time.time() * 1000)),
}

# function to merge context with base context
def context(items: Dict) -> Dict:
    return {**base_context, **items}

# view
def view(request):
    cxt = context({<...>})
    return render(request, "page.html", cxt)
my page.html extends my base.html template, where I use it like this:
<link rel="stylesheet" type="text/css" href="{% static 'style.css' %}?v={{ static_version }}">
fairly simple and does the job

How can I process large files in Code Repositories?

I have a data feed that gives a large .txt file (50-75GB) every day. The file contains several different schemas within it, where each row corresponds to one schema. I would like to split this into partitioned datasets for each schema, how can I do this efficiently?
The largest problem you need to solve is the iteration speed to recover your schemas, which can be challenging for a file at this scale.
Your best tactic here will be to get an example 'notional' file with each of the schemas you want to recover as a line within it, and to add this as a file within your repository. When you add this file into your repo (alongside your transformation logic), you will then be able to push it into a dataframe, much as you would with the raw files in your dataset, for quick testing iteration.
First, make sure you specify txt files as part of your package contents; this way your tests will discover them (this is covered in the documentation under Read a file from a Python repository):
You can read other files from your repository into the transform context. This might be useful in setting parameters for your transform code to reference.
To start, in your Python repository, edit setup.py:
setup(
    name=os.environ['PKG_NAME'],
    # ...
    package_data={
        '': ['*.txt']
    }
)
I am using a txt file with the following contents:
my_column, my_other_column
some_string,some_other_string
some_thing,some_other_thing,some_final_thing
This text file is at the following path in my repository: transforms-python/src/myproject/datasets/raw.txt
Once you have configured the text file to be shipped with your logic, and after you have included the file itself in your repository, you can then include the following code. This code has a couple of important functions:
It keeps raw file parsing logic completely separate from the stage of reading the file into a Spark DataFrame. This is so that the way this DataFrame is constructed can be left to the test infrastructure, or to the run time, depending on where you are running.
Keeping the logic separate lets you ensure that the actual row-by-row parsing you want to do is its own testable function, instead of having it live purely within your my_compute_function.
This code uses the Spark-native spark_session.read.text method, which will be orders of magnitude faster than row-by-row parsing of a raw txt file. This will ensure the parallelized DataFrame is what you operate on, not a single file, line by line, inside your executors (or worse, your driver).
from transforms.api import transform, Input, Output
from pkg_resources import resource_filename


def raw_parsing_logic(raw_df):
    return raw_df


@transform(
    my_output=Output("/txt_tests/parsed_files"),
    my_input=Input("/txt_tests/dataset_of_files"),
)
def my_compute_function(my_input, my_output, ctx):
    all_files_df = None
    for file_status in my_input.filesystem().ls('**/**'):
        raw_df = ctx.spark_session.read.text(my_input.filesystem().hadoop_path + "/" + file_status.path)
        parsed_df = raw_parsing_logic(raw_df)
        all_files_df = parsed_df if all_files_df is None else all_files_df.unionByName(parsed_df)
    my_output.write_dataframe(all_files_df)


def test_my_compute_function(spark_session):
    file_path = resource_filename(__name__, "raw.txt")
    raw_df = raw_parsing_logic(
        spark_session.read.text(file_path)
    )
    assert raw_df.count() > 0
    raw_columns_set = set(raw_df.columns)
    expected_columns_set = {"value"}
    assert len(raw_columns_set.intersection(expected_columns_set)) == 1
Once you have this code up and running, your test_my_compute_function method will be very fast to iterate on, so that you can perfect your schema recovery logic. This will make it substantially easier to get your dataset building at the very end, but without any of the overhead of a full build.

How to add w:altChunk and its relationship with python-docx

I have a use case that makes use of the <w:altChunk/> element in a Word document: inject (fragments of) HTML files as alternate chunks and let Word do the work when the file gets opened. The current implementation used XML/XSL to compose the WordML XML, modify relationships, and do all the packaging stuff manually, which is a real pain.
I wanted to move to python-docx, but the API doesn't support this directly. Currently I found a way to add the <w:altChunk/> in the document XML. But I still struggle to find a way to add the relationship and the related file to the package.
I think I should make a compatible part and pass it to the document.part.relate_to function to do its job, but I still can't figure out how:
from docx import Document
from docx.oxml import OxmlElement
from docx.oxml.ns import qn
from docx.opc.constants import RELATIONSHIP_TYPE as RT


def add_alt_chunk(doc: Document, chunk_part):
    ''' TODO: figuring how to add files and relationships'''
    r_id = doc.part.relate_to(chunk_part, RT.A_F_CHUNK)
    alt = OxmlElement('w:altChunk')
    alt.set(qn('r:id'), r_id)
    doc.element.body.sectPr.addprevious(alt)
Update:
As per scanny's advice, below is my working code. Thank you very much Steve!
from docx import Document
from docx.oxml import OxmlElement
from docx.oxml.ns import qn
from docx.opc.part import Part
from docx.opc.constants import RELATIONSHIP_TYPE as RT


def add_alt_chunk(doc: Document, html: str):
    package = doc.part.package
    partname = package.next_partname('/word/altChunk%d.html')
    alt_part = Part(partname, 'text/html', html.encode(), package)
    r_id = doc.part.relate_to(alt_part, RT.A_F_CHUNK)
    alt_chunk = OxmlElement('w:altChunk')
    alt_chunk.set(qn('r:id'), r_id)
    doc.element.body.sectPr.addprevious(alt_chunk)


doc = Document()
doc.add_paragraph('Hello')
add_alt_chunk(doc, "<body><strong>I'm an altChunk</strong></body>")
doc.add_paragraph('Have a nice day!')
doc.save('test.docx')
Note: the altChunk parts only work/appear when the document is opened using MS Word.
Well, some hints here anyway. Maybe you can post your working code at the end as a full "answer":
The alt-chunk part needs to start its life as a docx.opc.part.Part object.
The blob argument should be the bytes of the file, which is often but not always plain text. It must be bytes though, not unicode (characters), so any encoding has to happen before calling Part().
I expect you can work out the other arguments:
package is the overall OPC package, available on document.part.package.
You can use docx.opc.package.OpcPackage.next_partname() to get an available partname based on a root template like: "altChunk%s" for a name like "altChunk3". Check what partname prefix Word uses for these, possibly with unzip -l has-an-alt-chunk.docx; should be easy to spot.
The content-type is one in docx.opc.constants.CONTENT_TYPE. Check the [Content_Types].xml part in a .docx file that has an altChunk to see what they use.
Once formed, the document_part.relate_to() method will create the proper relationship. If there is more than one relationship (not common) then you need to create each one separately. There would only be one relationship from a particular part, just some parts are related to more than one other part. Check the relationships in an existing .docx to see, but pretty good guess it's only the one in this case.
So your code would look something like:
package = document.part.package
partname = package.next_partname("altChunkySomethingPrefix")
content_type = docx.opc.constants.CONTENT_TYPE.THE_RIGHT_MIME_TYPE
blob = make_the_altChunk_file_bytes()
alt_chunk_part = Part(partname, content_type, blob, package)
rId = document.part.relate_to(alt_chunk_part, RT.A_F_CHUNK)
etc.

SQLAlchemy automap: Best practices for performance

I'm building a python app around an existing (mysql) database and am using automap to infer tables and relationships:
base = automap_base()
self.engine = create_engine(
    'mysql://%s:%s@%s/%s?charset=utf8mb4' % (
        config.DB_USER, config.DB_PASSWD, config.DB_HOST, config.DB_NAME
    ), echo=False
)
# reflect the tables
base.prepare(self.engine, reflect=True)
self.TableName = base.classes.table_name
Using this I can do things like session.query(TableName) etc...
However, I'm worried about performance, because every time the app runs it will do the whole inference again.
Is this a legitimate concern?
If so, is there a possibility to 'cache' the output of Automap?
I think that "reflecting" the structure of your database is not the way to go. Unless your app tries to "infer" things from the structure, like static code analysis would for source files, it is unnecessary. The other reason for reflecting it at run-time would be the reduced time to start "using" the database with SQLAlchemy. However:
Another option would be to use something like SQLACodegen (https://pypi.python.org/pypi/sqlacodegen):
It will "reflect" your database once and create a 99.5% accurate set of declarative SQLAlchemy models for you to work with. However, this does require that you keep the model subsequently in-sync with the structure of the database. I would assume that this is not a big concern seeing as the tables you're already-working with are stable-enough such that run-time reflection of their structure does not impact your program much.
Generating the declarative models is essentially a "cache" of the reflection. It's just that SQLACodegen saved it into a very readable set of classes + fields instead of data in-memory. Even with a changing structure, and my own "changes" to the generated declarative models, I still use SQLACodegen later-on in a project whenever I make database changes. It means that my models are generally consistent amongst one-another and that I don't have things such as typos and data-mismatches due to copy-pasting.
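For reference, a typical invocation looks something like this (a sketch; the connection URL and output file are placeholders, and for MySQL you may need a driver such as pymysql installed):
sqlacodegen mysql+pymysql://user:password@host/dbname --outfile models.py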
Performance can be a legitimate concern. If the database schema is not changing, it can be time consuming to reflect the database every time a script is run. This is more of an issue during development, not starting up a long running application. It's also a significant time saver if your database is on a remote server (again, particularly during development).
I use code that is similar to the answer here (as noted by @ACV). The general plan is to perform the reflection the first time, then pickle the metadata object. The next time the script is run, it will look for the pickle file and use that. The file can be anywhere, but I place mine in ~/.sqlalchemy_cache. This is an example based on your code.
import os
import pickle

from sqlalchemy import create_engine
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.ext.declarative import declarative_base

self.engine = create_engine(
    'mysql://%s:%s@%s/%s?charset=utf8mb4' % (
        config.DB_USER, config.DB_PASSWD, config.DB_HOST, config.DB_NAME
    ), echo=False
)

metadata_pickle_filename = "mydb_metadata"
cache_path = os.path.join(os.path.expanduser("~"), ".sqlalchemy_cache")
cached_metadata = None
if os.path.exists(cache_path):
    try:
        with open(os.path.join(cache_path, metadata_pickle_filename), 'rb') as cache_file:
            cached_metadata = pickle.load(file=cache_file)
    except IOError:
        # cache file not found - no problem, reflect as usual
        pass

if cached_metadata:
    base = declarative_base(bind=self.engine, metadata=cached_metadata)
else:
    base = automap_base()
    base.prepare(self.engine, reflect=True)  # reflect the tables

    # save the metadata for future runs
    try:
        if not os.path.exists(cache_path):
            os.makedirs(cache_path)
        # make sure to open in binary mode - we're writing bytes, not str
        with open(os.path.join(cache_path, metadata_pickle_filename), 'wb') as cache_file:
            pickle.dump(base.metadata, cache_file)
    except:
        # couldn't write the file for some reason
        pass

self.TableName = base.classes.table_name
For anyone using declarative table class definitions, assuming a Base object defined as e.g.
Base = declarative_base(bind=engine)

metadata_pickle_filename = "ModelClasses_trilliandb_trillian.pickle"

# ------------------------------------------
# Load the cached metadata if it's available
# ------------------------------------------
# NOTE: delete the cached file if the database schema changes!!
cache_path = os.path.join(os.path.expanduser("~"), ".sqlalchemy_cache")
cached_metadata = None
if os.path.exists(cache_path):
    try:
        with open(os.path.join(cache_path, metadata_pickle_filename), 'rb') as cache_file:
            cached_metadata = pickle.load(file=cache_file)
    except IOError:
        # cache file not found - no problem
        pass

# ------------------------------------------
# define all tables
#
class MyTable(Base):
    if cached_metadata:
        __table__ = cached_metadata.tables['my_schema.my_table']
    else:
        __tablename__ = 'my_table'
        __table_args__ = {'autoload': True, 'schema': 'my_schema'}
    ...

# ----------------------------------------
# If no cached metadata was found, save it
# ----------------------------------------
if cached_metadata is None:
    # cache the metadata for future loading
    # - MUST DELETE IF THE DATABASE SCHEMA HAS CHANGED
    try:
        if not os.path.exists(cache_path):
            os.makedirs(cache_path)
        # make sure to open in binary mode - we're writing bytes, not str
        with open(os.path.join(cache_path, metadata_pickle_filename), 'wb') as cache_file:
            pickle.dump(Base.metadata, cache_file)
    except:
        # couldn't write the file for some reason
        pass
Important Note!! If the database schema changes, you must delete the cached file to force the code to autoload and create a new cache. If you don't, the changes will not be reflected in the code. It's an easy thing to forget.
The answer to your first question is largely subjective. You are adding database queries to fetch the reflection metadata to the application load time. Whether or not that overhead is significant depends on your project requirements.
For reference, I have an internal tool at work that uses a reflection pattern because the load-time is acceptable for our team. That might not be the case if it were an externally-facing product. My hunch is that for most applications the reflection overhead will not dominate the total application load time.
If you decide it is significant for your purposes, this question has an interesting answer where the user pickles the database metadata in order to locally cache it.
Adding to this, what @Demitri answered is close to correct, but (at least in SQLAlchemy 1.4.29) the example will fail on the last line, self.TableName = base.classes.table_name, when generating from the cached file. In this case declarative_base() has no attribute classes.
The fix is as simple as altering:
if cached_metadata:
    base = declarative_base(bind=self.engine, metadata=cached_metadata)
else:
    base = automap_base()
    base.prepare(self.engine, reflect=True)  # reflect the tables
to
if cached_metadata:
    base = automap_base(declarative_base(bind=self.engine, metadata=cached_metadata))
    base.prepare()
else:
    base = automap_base()
    base.prepare(self.engine, reflect=True)  # reflect the tables
This will create your automap object with the proper attributes.

Naming variables that contain filenames?

If I've got a variable that contains the fully-qualified name of a file (for example, a project file), should it be called projectFile, projectFileName or projectPath? Or something else?
I usually go with these:
FileName for the file name only (without path)
FilePath for the parent path only (without the file name)
FileFullName for the fully qualified name with path
I don't think there is such a thing as an accepted standard for this. It depends on your (team's) preferences and whether you need to differentiate between the three in a given situation.
EDIT: My thoughts on these particular naming conventions are these:
intuitively, a "Name" is a string, so is a "Path" (and a "FileName")
a "Name" is relative unless it is a "FullName"
related variable names should begin with the same prefix ("File" + ...), I think this improves readability
concretions/properties are right-branching: "File" -> "FileName"
specializations are left-branching: "FileName" -> "ProjectFileName" (or "ProjectFileFullName")
a "File" is an object/handle representing a physical object, so "ProjectFile" cannot be a string
I cannot always stick to these conventions, but I try to. If I decided to use a particular naming pattern I am consistent even if it means that I have to write more descriptive (= longer) variable names. Code is more often read than written, so the little extra typing doesn't bother me too much.
System.IO.Path seems to refer to the fully qualified name of the file as the path, the name of the file itself as filename, and its containing directory as directory. I would suggest that in your case projectPath is most in keeping with this nomenclature if you are using it in the context of System.IO.Path. I would then refer to the name of the file as fileName and its containing directory as parentDirectory.
I don't think there's a consensus on this, just try to be consistent.
Examples from the .NET Framework:
FileStream(string path...);
Assembly.LoadFrom(string assemblyFile)
XmlDocument.Load(string filename)
Even the casing of filename (filename / fileName) is inconsistent in the framework, e.g.:
FileInfo.CopyTo(string destFileName)
You should title it after the file it is likely to contain, ie:
system_configuration_file_URI
user_input_file_URI
document_template_file_URI
"file" or "filename" is otherwise mostly useless
Also, "file" could mean "I am a file point" which is ambiguous, "filename" doesn't state whether or not it has context ( ie: directories ), fileURI is contextwise unambiguous as it says "this is a resource identifier that when observed, points to a resource ( a file ) "
Some thoughts:
projectFile might not be a string -- it might be an object that represents the parsed contents of the file.
projectFileName might not be fully-qualified. If the file is actually "D:\Projects\MyFile.csproj", then this might be expected to contain "MyFile.csproj".
projectPath might be considered to be the fully-qualified path to the file, or it might be considered to be the name of the parent folder containing the file.
projectFolder might be considered to hold the name of the parent folder, or it might actually be an implementation of some kind of Folder abstraction in the code.
.NET sometimes uses path to refer to a file, and sometimes uses filename.
Notice that “filename” is one word in English! There's no need to capitalize the “n” in the middle of the identifier.
That said, I append Filename to all my string variables that contain filenames. However, I try to avoid this whole scenario in favour of using strongly typed variables in languages that support a type of files and directories. After all, this is what extensible type systems are there for.
In strongly typed languages, the need for the descriptive postfix is then often unnecessary (especially in function arguments) because variable type and usage infers its content.
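To illustrate that point with a small sketch of my own: with a dedicated path type such as Python's pathlib.Path, the variable name no longer has to encode whether it holds a bare name, a parent directory, or a fully qualified path.
from pathlib import Path

project_file = Path("/projects/my_project/my_project.csproj")

print(project_file.name)    # 'my_project.csproj' - the file name only
print(project_file.parent)  # /projects/my_project - the containing directory
print(project_file)         # /projects/my_project/my_project.csproj - the full path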
All of this depends partly on the size of the method and if the variables are class variables.
If they are class variables or in a large complicated method, then follow Kent Fredric's advice and name them something that indicates what the file is used for, i.e. "projectFileName".
If this is a small utility method that, say, deletes a file, then don't name it "projectFileName". Call it simply "filename" then.
I would never name it "path", since that implies that it's referring to the folder that it's in.
Calling it "file" would be OK, if there were not also another variable, like "fileID" or "filePtr".
So, I would use "folder" or "path" to identify the directory that the file is in.
And "fileID" to represent a file object.
And finaly, "filename" for the actual name of the file.
Happy Coding,
Randy
Based on the above answers, the choices are ambiguous as to their meaning and there is no widely accepted name. If this is a .NET method requesting a file location, specify in the XML comments what you want, along with an example, so that another developer can refer to it if they are unsure of what you want.
It depends on the naming conventions used in your environment, e.g. in Python the answer would be projectpath, due to the naming conventions of the os.path module.
>>> import os.path
>>> path = "/path/to/tmp.txt"
>>> os.path.abspath(path)
'c:\\path\\to\\tmp.txt'
>>> os.path.split(path)
('/path/to', 'tmp.txt')
>>> os.path.dirname(path)
'/path/to'
>>> os.path.basename(path)
'tmp.txt'
>>> os.path.splitext(_)
('tmp', '.txt')