Mercurial hook for pretag

I want to enforce a policy (based on a regex) on tag names when code is tagged in Mercurial. After searching a lot, I found the sample below.
import re

version_re = r'(ver-\d+\.\d+\.\d+|tip)$'

def invalidtag(ui, repo, hooktype, node, **kwargs):
    assert(hooktype == 'pretag')
    re_ = re.compile(version_re)
    if not re_.match(tag):
        ui.warn('Invalid tag name "%s".\n' % tag)
        return True
    return False
My question is: how can I get, from the repo, the name of the tag that is being committed?

According to this documentation, the tag name is passed in along with the hook so I think that changing your hook definition to this might work:
def invalidtag(ui, repo, hooktype, node, tag, **kwargs):
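For completeness, here is a minimal sketch of the full hook with the tag argument wired in, reusing the question's regex (the hook name and script path in the final comment are assumptions, not something from the question):

import re

version_re = r'(ver-\d+\.\d+\.\d+|tip)$'

def invalidtag(ui, repo, hooktype, node, tag, **kwargs):
    # pretag hooks receive the tag name as the 'tag' keyword argument
    assert hooktype == 'pretag'
    if not re.match(version_re, tag):
        ui.warn('Invalid tag name "%s".\n' % tag)
        return True  # a truthy return value aborts the tag operation
    return False

# Registered in .hg/hgrc, for example (hypothetical path and hook name):
#   [hooks]
#   pretag.checktagname = python:/path/to/tagcheck.py:invalidtag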


Implementing a glossary in Jekyll

I'm porting a doc site from Madcap Flare to GitHub Pages using Jekyll. One thing I'm wondering about: is there a way of implementing an automatic glossary similar to Flare's?
https://www.madcapsoftware.com/blog/guest-post-madcap-flare-101-7-the-glossary/
In summary: I want to have a glossary file comprised of terms and definitions. Then if a term occurs in a topic, it automatically becomes a link or pop-up link to the definition.
If anyone has done this or knows a way to do it I would appreciate it.
Yes, this is possible. Pop-over elements are also possible but I have chosen the link example.
This plugin can create links to the glossary:
module GlossaryData
  def glossary
    self['glossary']
  end
end

module Jekyll
  class GlossaryLinkGenerator < Generator
    def generate(site)
      site.data.extend(GlossaryData)

      glossary_terms = {}
      site.data.glossary.each do |entry|
        glossary_terms[entry['term'].downcase] = entry['definition']
        glossary_terms[entry['term']] = entry['definition']
      end

      # pages
      site.pages.each do |page|
        glossary_terms.each do |term, definition|
          page.content = page.content.gsub(term) do |match|
            "<a href='/glossary.html##{term}'>#{term}</a>"
          end
        end
      end

      # posts
      site.posts.docs.each do |post|
        glossary_terms.each do |term, definition|
          post.content = post.content.gsub(term) do |match|
            "<a href='/glossary.html##{term}'>#{term}</a>"
          end
        end
      end

      # Notes:
      # For all content, use site.documents instead of the two loops above.
      # The site.documents attribute is an array of all the documents in your
      # site, including pages, posts, and any other collections defined in
      # your configuration; use it if you want to apply the glossary link
      # generator to every type of document.
      # You can also use site.collections[COLLECTION_NAME].docs to target a
      # specific collection (replace COLLECTION_NAME with the collection's
      # name), e.g. site.collections['posts'].docs instead of site.posts.
      # site.collections is a hash of all the collections defined in your
      # site's configuration, and each collection has a 'docs' attribute
      # that returns an array of its documents.
    end
  end
end
Define your glossary in _data/glossary.yml:
- term: Jekyll
  definition: A static site generator written in Ruby.
- term: Liquid
  definition: A template language used by Jekyll to generate dynamic content.
- term: Markdown
  definition: A lightweight markup language used to format plain text.
The test page to see that the glossary is working:
---
layout: post
---
jekyll or Jekyll? Both are matched due to downcasing.

clearing browser cache in django [duplicate]

I'm working on a universal solution to the problem of static files and updates to them.
Example: let's say there was a site with a /static/styles.css file, and the site was used for a long time, so a lot of visitors cached this file in their browsers.
Now we make changes to this CSS file and update it on the server, but some users still have the old version (despite the modification date returned by the server).
The obvious solution is to add a version to the file: /static/styles.css?v=1.1. But in this case the developer must track changes in this file and manually increase the version.
A second solution is to compute the MD5 hash of the file and add it to the URL: /static/styles.css?v={md5hashvalue}. That looks much better, but the MD5 should be calculated automatically somehow.
The possible way I see it: create some template tag like this
{% static_file "style.css" %}
which will render
<link href="/static/style.css?v=md5hash" rel="stylesheet">
BUT, I do not want this tag to calculate the MD5 on every page load, and I do not want to store the hash in the Django cache, because then we would have to clear it after updating the file...
Any thoughts?
Django 1.4 now includes CachedStaticFilesStorage which does exactly what you need (well... almost).
Since Django 2.2 ManifestStaticFilesStorage should be used instead of CachedStaticFilesStorage.
You use it with the manage.py collectstatic task. All static files are collected from your applications, as usual, but this storage manager also creates a copy of each file with the MD5 hash appended to the name. So for example, say you have a css/styles.css file, it will also create something like css/styles.55e7cbb9ba48.css.
Of course, as you mentioned, the problem is that you don't want your views and templates calculating the MD5 hash all the time to find out the appropriate URLs to generate. The solution is caching. OK, you asked for a solution without caching; I'm sorry, that's why I said almost. But there's no reason to reject caching, really. CachedStaticFilesStorage uses a specific cache named staticfiles. By default, it will use your existing cache system, and voilà! But if you don't want it to use your regular cache, perhaps because it's a distributed memcache and you want to avoid the overhead of network queries just to get static file names, then you can set up a specific RAM cache just for staticfiles. It's easier than it sounds: check out this excellent blog post. Here's what it would look like:
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.PyLibMCCache',
        'LOCATION': '127.0.0.1:11211',
    },
    'staticfiles': {
        'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
        'LOCATION': 'staticfiles-filehashes',
    },
}
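To enable the storage itself, a one-line settings sketch (the Django 1.4-era path; on Django 2.2+ use the ManifestStaticFilesStorage mentioned above):

# settings.py
STATICFILES_STORAGE = 'django.contrib.staticfiles.storage.CachedStaticFilesStorage'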
I would suggest using something like django-compressor. In addition to handling this type of thing for you automatically, it will also combine and minify your files for faster page loads.
Even if you don't end up using it in entirety, you can inspect their code for guidance in setting up something similar. It's been better vetted than anything you'll ever get from a simple StackOverflow answer.
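If you do try it, the basic wiring looks roughly like this (a sketch based on the django-compressor docs; check the current documentation before relying on it):

# settings.py -- minimal django-compressor setup (sketch)
INSTALLED_APPS = [
    # ...
    'compressor',
]

STATICFILES_FINDERS = [
    'django.contrib.staticfiles.finders.FileSystemFinder',
    'django.contrib.staticfiles.finders.AppDirectoriesFinder',
    'compressor.finders.CompressorFinder',
]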
I use my own template tag which adds the file modification date to the URL: https://bitbucket.org/ad3w/django-sstatic
Is reinventing the wheel and creating your own implementation that bad? Furthermore, I would like low-level code (nginx, for example) to serve my static files in production instead of a Python application, even one with a caching backend. And one more thing: I'd like the links to stay the same after recalculation, so the browser fetches only changed files. So here's my point of view:
template.html:
{% load md5url %}
<script src="{% md5url "example.js" %}"/>
output HTML:
static/example.js?v=5e52bfd3
settings.py:
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(PROJECT_DIR, 'static')
appname/templatetags/md5url.py:
import hashlib
import threading
from os import path

from django import template
from django.conf import settings

register = template.Library()


class UrlCache(object):
    _md5_sum = {}
    _lock = threading.Lock()

    @classmethod
    def get_md5(cls, file):
        try:
            return cls._md5_sum[file]
        except KeyError:
            with cls._lock:
                try:
                    md5 = cls.calc_md5(path.join(settings.STATIC_ROOT, file))[:8]
                    value = '%s%s?v=%s' % (settings.STATIC_URL, file, md5)
                except IsADirectoryError:
                    value = settings.STATIC_URL + file
                cls._md5_sum[file] = value
                return value

    @classmethod
    def calc_md5(cls, file_path):
        with open(file_path, 'rb') as fh:
            m = hashlib.md5()
            while True:
                data = fh.read(8192)
                if not data:
                    break
                m.update(data)
            return m.hexdigest()


@register.simple_tag
def md5url(model_object):
    return UrlCache.get_md5(model_object)
Note: to pick up changes, the uWSGI application (to be specific, the process) should be restarted.
Django 1.7 added ManifestStaticFilesStorage, a better alternative to CachedStaticFilesStorage that doesn't use the cache system and solves the problem of the hash being computed at runtime.
Here is an excerpt from the documentation:
CachedStaticFilesStorage isn’t recommended – in almost all cases ManifestStaticFilesStorage is a better choice. There are several performance penalties when using CachedStaticFilesStorage since a cache miss requires hashing files at runtime. Remote file storage require several round-trips to hash a file on a cache miss, as several file accesses are required to ensure that the file hash is correct in the case of nested file paths.
To use it, simply add the following line to settings.py:
STATICFILES_STORAGE = 'django.contrib.staticfiles.storage.ManifestStaticFilesStorage'
And then, run python manage.py collectstatic; it will append the MD5 to the name of each static file.
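With the manifest storage enabled and collectstatic run, the regular {% static %} tag (and its Python counterpart) resolves names through the generated manifest; a quick sketch:

# with ManifestStaticFilesStorage enabled and collectstatic run,
# static() returns the hashed URL recorded in the manifest
from django.contrib.staticfiles.templatetags.staticfiles import static

print(static('css/styles.css'))  # e.g. '/static/css/styles.55e7cbb9ba48.css'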
How about always having a URL parameter with a version in your URL, and whenever you have a major release you change the version in that parameter, even in the DNS. So if www.yourwebsite.com loads www.yourwebsite.com/index.html?version=1.0, then after the major release the browser should load www.yourwebsite.com/index.html?version=2.0.
I guess this is similar to your solution 1. Instead of tracking files, can you track whole directories? For example, rather than /static/style/css?v=2.0, can you do /static-2/style/css, or to make it even more granular, /static/style/cssv2/.
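A sketch of that directory-versioning idea in Django settings (VERSION here is a hypothetical constant you would bump on each major release):

# settings.py -- version the whole static prefix instead of individual files
VERSION = '2.0'  # hypothetical; bump on each major release
STATIC_URL = '/static-%s/' % VERSION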
There is an update of @deathangel908's code. Now it works well with S3 storage too (and with any other storage, I think). The difference is using the static file storage to get the file content. The original doesn't work on S3.
appname/templatetags/md5url.py:
import hashlib
import threading

from django import template
from django.conf import settings
from django.contrib.staticfiles.storage import staticfiles_storage

register = template.Library()


class UrlCache(object):
    _md5_sum = {}
    _lock = threading.Lock()

    @classmethod
    def get_md5(cls, file):
        try:
            return cls._md5_sum[file]
        except KeyError:
            with cls._lock:
                try:
                    md5 = cls.calc_md5(file)[:8]
                    value = '%s%s?v=%s' % (settings.STATIC_URL, file, md5)
                except OSError:
                    value = settings.STATIC_URL + file
                cls._md5_sum[file] = value
                return value

    @classmethod
    def calc_md5(cls, file_path):
        with staticfiles_storage.open(file_path, 'rb') as fh:
            m = hashlib.md5()
            while True:
                data = fh.read(8192)
                if not data:
                    break
                m.update(data)
            return m.hexdigest()


@register.simple_tag
def md5url(model_object):
    return UrlCache.get_md5(model_object)
The major advantage of this solution: you don't have to modify anything in the templates.
This will add the build version into the STATIC_URL, and then the webserver will remove it with a Rewrite rule.
settings.py
# build version, it's increased with each build
VERSION_STAMP = __versionstr__.replace(".", "")
# rewrite static url to contain the number
STATIC_URL = '%sversion%s/' % (STATIC_URL, VERSION_STAMP)
So the final url would be for example this:
/static/version010/style.css
And then Nginx has a rule to rewrite it back to /static/style.css
location /static {
    alias /var/www/website/static/;
    rewrite ^(.*)/version([\.0-9]+)/(.*)$ $1/$3;
}
A simple template tag, vstatic, that creates versioned static file URLs, extending Django's behaviour:
from django import template
from django.conf import settings
from django.contrib.staticfiles.templatetags.staticfiles import static

register = template.Library()

@register.simple_tag
def vstatic(path):
    url = static(path)
    static_version = getattr(settings, 'STATIC_VERSION', '')
    if static_version:
        url += '?v=' + static_version
    return url
If you want to automatically set STATIC_VERSION to the current git commit hash, you can use the following snippet (Python 3 code; adjust if necessary):
import subprocess

def get_current_commit_hash():
    try:
        return subprocess.check_output(
            ['git', 'rev-parse', '--short', 'HEAD']
        ).strip().decode('utf-8')
    except Exception:
        return ''
In settings.py, call get_current_commit_hash() so this is calculated only once:
STATIC_VERSION = get_current_commit_hash()
I use a global base context in all my views, where I set the static version to the millisecond timestamp (that way, it is a new version every time I restart my application):
import time
from typing import Dict

from django.conf import settings
from django.shortcuts import render

# global base context
base_context = {
    "title": settings.SITE_TITLE,
    "static_version": int(round(time.time() * 1000)),
}

# function to merge a view's context with the base context
def context(items: Dict) -> Dict:
    return {**base_context, **items}

# view
def view(request):
    cxt = context({<...>})
    return render(request, "page.html", cxt)
my page.html extends my base.html template, where I use it like this:
<link rel="stylesheet" type="text/css" href="{% static 'style.css' %}?v={{ static_version }}">
Fairly simple, and it does the job.

Store json-like hierarchical data as nested directory tree?

TLDR
I am looking for an existing convention to encode / serialize tree-like data in a directory structure, split into small files instead of one big file.
Background
There are different scenarios where we want to store tree-like data in a file, which can then be tracked in git. Json files can express dependencies for a package manager (e.g. composer for php, npm for node.js). Yml files can define routes, test cases, etc.
Typically a "tree structure" is a combination of key-value lists and "serial" lists, where each value can again be a tree structure.
Very often the order of associative keys is irrelevant, and should ideally be normalized to alphabetic order.
One problem when storing a big tree structure in a single file, be it json or yml, which is then tracked with git, is that you get plenty of merge conflicts if different branches add and remove entries in the same key-value list.
Especially for key-value lists where the order is irrelevant, it would be more git-friendly to store each sub-tree in a separate file or directory, instead of storing them all in one big file.
Technically it should be possible to create a directory structure that is as expressive as json or yml.
Performance concerns can be overcome with caching. If the files are going to be tracked in git, we can assume they are going to be unchanged most of the time.
The main challenges:
- How to deal with "special characters" that cause problems in some or most file systems, if used in a file or directory name?
- If I need to encode or disambiguate special characters, how can I still keep it pleasant to the eye?
- How to deal with limitations to file name length in some file systems?
- How to deal with other file system quirks, e.g. case insensitivity? Is this even still a thing?
- How to express serial lists, which might contain key-value lists as children? Serial lists cannot be expressed as directories, so its children have to live within the same file.
- How can I avoid reinventing the wheel and creating my own made-up "convention" that nobody else uses?
Desired features:
- As expressive as json or yml.
- git-friendly.
- Machine-readable and -writable.
- Human-readable and -editable, perhaps with limitations.
- Ideally it should use known formats (json, yml) for structures and values that are expressed within a single file.
Naive approach
Of course the first idea would be to use yml files for literal values and serial lists, and directories for key-value lists (in cases where the order does not matter). In a key-value list, the file or directory names are interpreted as keys, the files and subdirectories as values.
This has some limitations, because not every possible key that would be valid in json or yml is also a valid file name in every file system. The most obvious example would be a slash.
Question
I have different ideas how I would do this myself.
But I am really looking for some kind of convention for this that already exists.
Related questions
Persistence: Data Trees stored as Directory Trees
This is asking about performance, and about using the filesystem like a database - I think.
I am less interested in performance (caching makes it irrelevant), and more interested in the actual storage format / convention.
The closest thing I can think of that could be seen as some kind of convention for doing this is Linux configuration files. In modern Linux, you often split the configuration of a service into multiple files residing in a certain directory, e.g. /etc/exim4/conf.d/ instead of having a single file /etc/exim/exim4.conf. There are multiple reasons for doing this:
Some configuration may be provided by the package manager (e.g. linking to other services that are installed via package manager), while other parts are user-defined. Since there would be a conflict if the user edits a file provided by the package manager, they can instead create a new file and enter additional configuration there.
For large configuration files (like for exim4), it's easier to navigate the configuration if you have multiple files for different concerns (hardcore vim users might disagree).
You can enable / disable parts of the configuration more easily by renaming / moving the file that contains a certain part.
We can learn a bit from this: separation into distinct files should happen if the semantics of the content are orthogonal, i.e. the semantics of one file do not depend on the semantics of another file. This is of course a rule for sibling files; we cannot really deduce rules for serializing a tree structure as a directory tree from it. However, we can definitely see reasons for not splitting every value into its own file.
You mention problems of encoding special characters into a file name. You will only have this problem if you go against conventions! The implicit convention on file and directory names is that they act as locator / ID for files, never as content. Again, we can learn a bit from Linux config files: Usually, there is a master file that contains an include statement which loads all the split files. The include statement gives a path glob expression which locates the other files. The path to those files is irrelevant for the semantics of their content. Technically, we can do something similar with YAML.
Assume we want to split this single YAML file into multiple files (pardon my lack of creativity):
spam:
  spam: spam
  egg: sausage
baked beans:
- spam
- spam
- bacon
A possible transformation would be this (read stuff ending with / as directory, : starts file content):
confdir/
  main.yaml:
    spam: !include spammap/main.yaml
    baked beans: !include beans/
  spammap/
    main.yaml:
      spam: !include spam.yaml
      egg: !include egg.yaml
    spam.yaml:
      spam
    egg.yaml:
      sausage
  beans/
    1.yaml:
      spam
    2.yaml:
      spam
    3.yaml:
      bacon
(In YAML, !include is a local tag. With most implementations, you can register a custom constructor for it, thus loading the whole hierarchy as single document.)
As you can see, I put every hierarchy level and every value into a separate file. I use two kinds of includes: A reference to a file will load the content of that file; a reference to a directory will generate a sequence where each item's value is the content of one file in that directory, sorted by file name. As you can see, the file and directory names are never part of the content, sometimes I opted to name them differently (e.g. baked beans -> beans/) to avoid possible file system problems (spaces in filenames in this case – usually not a serious problem nowadays). Also, I adhere to the filename extension convention (having the files carry .yaml). This would be more quirky if you put content into the file names.
I named the starting file on each level main.yaml (not needed in beans/ since it's a sequence). While the exact name is arbitrary, this is a convention used in several other tools, e.g. Python with __init__.py or the Nix package manager with default.nix. Then I placed additional files or directories besides this main file.
Since including other files is explicit, it is not a problem with this approach to put a larger part of the content into a single file. Note that JSON lacks YAML's tags functionality, but you can still walk through a loaded JSON file and preprocess values like {"!include": "path"}.
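A minimal sketch of that JSON preprocessing idea (resolve_includes is a hypothetical helper; the {"!include": "path"} shape is just the convention suggested above):

import json
import os.path

def resolve_includes(value, basedir):
    # Recursively replace {"!include": "path"} objects with the parsed
    # content of the referenced file; paths are relative to basedir.
    if isinstance(value, dict):
        if set(value) == {"!include"}:
            path = os.path.join(basedir, value["!include"])
            with open(path) as fh:
                return resolve_includes(json.load(fh), os.path.dirname(path))
        return {k: resolve_includes(v, basedir) for k, v in value.items()}
    if isinstance(value, list):
        return [resolve_includes(item, basedir) for item in value]
    return value

# usage:
# with open("confdir/main.json") as fh:
#     data = resolve_includes(json.load(fh), "confdir")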
To sum up: while there is no ready-made convention for exactly what you want, parts of the problem have been solved in different places, and you can inherit wisdom from them.
Here's a minimal working example of how to do it with PyYAML. This is just a proof of concept; several features are missing (e.g. autogenerated file names will be ascending numbers, no support for serializing lists into directories). It shows what needs to be done to store information about the data layout while being transparent to the user (data can be accessed like a normal dict structure). It remembers file names stuff has been loaded from and stores to those files again.
import os.path
from pathlib import Path

import yaml
from yaml.reader import Reader
from yaml.scanner import Scanner
from yaml.parser import Parser
from yaml.composer import Composer
from yaml.constructor import SafeConstructor
from yaml.resolver import Resolver
from yaml.emitter import Emitter
from yaml.serializer import Serializer
from yaml.representer import SafeRepresenter


class SplitValue(object):
    """This is a value that should be written into its own YAML file."""

    def __init__(self, content, path=None):
        self._content = content
        self._path = path

    def getval(self):
        return self._content

    def setval(self, value):
        self._content = value

    def __repr__(self):
        return self._content.__repr__()


class TransparentContainer(object):
    """Makes SplitValues transparent to the user."""

    def __getitem__(self, key):
        val = super(TransparentContainer, self).__getitem__(key)
        return val.getval() if isinstance(val, SplitValue) else val

    def __setitem__(self, key, value):
        val = super(TransparentContainer, self).__getitem__(key)
        if isinstance(val, SplitValue) and not isinstance(value, SplitValue):
            val.setval(value)
        else:
            super(TransparentContainer, self).__setitem__(key, value)


class TransparentList(TransparentContainer, list):
    pass


class TransparentDict(TransparentContainer, dict):
    pass


class DirectoryAwareFileProcessor(object):
    def __init__(self, path, mode):
        self._basedir = os.path.dirname(path)
        self._file = open(path, mode)

    def close(self):
        try:
            self._file.close()
        finally:
            self.dispose()  # implemented by PyYAML

    # __enter__ / __exit__ to use this in a `with` construct
    def __enter__(self):
        return self

    def __exit__(self, type, value, traceback):
        self.close()


class FilesystemLoader(DirectoryAwareFileProcessor, Reader, Scanner,
                       Parser, Composer, SafeConstructor, Resolver):
    """Loads YAML file from a directory structure."""

    def __init__(self, path):
        DirectoryAwareFileProcessor.__init__(self, path, 'r')
        Reader.__init__(self, self._file)
        Scanner.__init__(self)
        Parser.__init__(self)
        Composer.__init__(self)
        SafeConstructor.__init__(self)
        Resolver.__init__(self)


def split_value_constructor(loader, node):
    path = loader.construct_scalar(node)
    with FilesystemLoader(os.path.join(loader._basedir, path)) as childLoader:
        return SplitValue(childLoader.get_single_data(), path)

FilesystemLoader.add_constructor(u'!include', split_value_constructor)


def transp_dict_constructor(loader, node):
    ret = TransparentDict()
    ret.update(loader.construct_mapping(node, deep=True))
    return ret

# override constructor for !!map, the default resolved tag for mappings
FilesystemLoader.add_constructor(yaml.resolver.BaseResolver.DEFAULT_MAPPING_TAG,
                                 transp_dict_constructor)


def transp_list_constructor(loader, node):
    ret = TransparentList()
    ret.extend(loader.construct_sequence(node, deep=True))
    return ret

# like above, for !!seq
FilesystemLoader.add_constructor(yaml.resolver.BaseResolver.DEFAULT_SEQUENCE_TAG,
                                 transp_list_constructor)


class FilesystemDumper(DirectoryAwareFileProcessor, Emitter,
                       Serializer, SafeRepresenter, Resolver):
    def __init__(self, path):
        DirectoryAwareFileProcessor.__init__(self, path, 'w')
        Emitter.__init__(self, self._file)
        Serializer.__init__(self)
        SafeRepresenter.__init__(self)
        Resolver.__init__(self)
        self.__next_unique_name = 1
        Serializer.open(self)

    def gen_unique_name(self):
        val = self.__next_unique_name
        self.__next_unique_name = self.__next_unique_name + 1
        return str(val)

    def close(self):
        try:
            Serializer.close(self)
        finally:
            DirectoryAwareFileProcessor.close(self)


def split_value_representer(dumper, data):
    if data._path is None:
        if isinstance(data._content, TransparentContainer):
            data._path = os.path.join(dumper.gen_unique_name(), "main.yaml")
        else:
            data._path = dumper.gen_unique_name() + ".yaml"
    Path(os.path.dirname(data._path)).mkdir(exist_ok=True)
    with FilesystemDumper(os.path.join(dumper._basedir, data._path)) as childDumper:
        childDumper.represent(data._content)
    return dumper.represent_scalar(u'!include', data._path)

yaml.add_representer(SplitValue, split_value_representer, FilesystemDumper)


def transp_dict_representer(dumper, data):
    return dumper.represent_dict(data)

yaml.add_representer(TransparentDict, transp_dict_representer, FilesystemDumper)


def transp_list_representer(dumper, data):
    return dumper.represent_list(data)

yaml.add_representer(TransparentList, transp_list_representer, FilesystemDumper)


# example usage:
# explicitly specify values that should be split.
myData = TransparentDict({
    "spam": SplitValue({
        "spam": SplitValue("spam", "spam.yaml"),
        "egg": SplitValue("sausage", "sausage.yaml")}, "spammap/main.yaml")})

with FilesystemDumper("root.yaml") as dumper:
    dumper.represent(myData)

# load values from stored files.
# The loaded data remembers which values have been in which files.
with FilesystemLoader("root.yaml") as loader:
    loaded = loader.get_single_data()

# modify a value as if it was a normal structure.
# actually updates a SplitValue
loaded["spam"]["spam"] = "baked beans"

# dumps the same structure as before, with the modified value.
with FilesystemDumper("root.yaml") as dumper:
    dumper.represent(loaded)

can't update the attribute with ActiveRecord

I want to swap the content in the answers table with ActiveRecord.
code 1:
Archieve::Answer.find_each do |answer|
  str = answer.content
  dosomething() # change the value
  answer.update_attribute(:content, str)
end
But it doesn't change the value of content.
code 2:
Archieve::Answer.find_each do |answer|
  str = answer.content
  dosomething() # change the value
  answer.reload
  answer.update_attributes(
    :content => str
  )
end
Before updating the :content attribute, I reload the record every time.
This does indeed change the value.
Why?
What's the difference between code 1 & code 2?
Source Code
#1 Post Debug Message:
Updated Post:
Changed?: false
valid?: true
errors: #<ActiveModel::Errors:0xa687568>
errors: #<ActiveModel::Errors:0xa687568 #base=#<Archieve::Answer id: 9997190932758339, user_id: 4163690810052834, question_id: 3393286738785869, content: "狗狗生病,好可怜呀,", is_correct: false, votes_count: 0, comments_count: 0, created_at: "2011-11-06 18:38:53", updated_at: "2011-11-06 18:38:53">, #messages={}>
possible ActiveRecord 3.1.1 bug
The OP mentioned to me that he uses require "active_record" in a standalone script (not using rails runner).
There is no separate Rails application for his task; he just uses a script. This is not necessarily bad, and it has worked in earlier ActiveRecord versions, e.g. 2.x AFAIK -- maybe this is a regression in Rails 3.1 due to a new dependency?
# the OP's require statements:
require 'rubygems'
require 'logger'
require 'yaml'
require 'uuidtools'
require 'active_record'
complete code here: https://raw.github.com/Zhengquan/Swap_Chars/master/lib/orm.rb
Maybe a dependency is missing, or there is a problem with AR 3.1.1 when it is initialized standalone?
It could be a bug actually
It could be that update_attribute() triggers a bug in the dirty-tracking of attributes, which then incorrectly assumes that the object has not changed, and as a result it will not be persisted, although the implementation of update_attribute() calls save() (see code fragment below).
I've seen something like this with an older version of Mongoid -- could be that there is a similar hidden bug in your ActiveRecord version for update_attribute()
In the Rails Console monkey-patch update_attribute like this:
class ActiveRecord::Base
  def update_attribute(name, value) # make sure you use the exact code of your Rails version here
    send(name.to_s + '=', value)
    puts "Changed?: #{changed?}" # this produced false in the OP's scenario
    puts "valid?: #{valid?}"
    puts "errors: #{errors.inspect}"
    save
  end
end
then try to run your Code 1 again...
You shouldn't see "Changed?: false". If it prints false although you changed the attribute, then there is a bug in your ActiveRecord version and you should report it.
Code 1:
NOTE: check the definition of update_attribute() (singular) here:
http://ar.rubyonrails.org/classes/ActiveRecord/Base.html#M000400
(please read the fine print regarding validations -- it doesn't sound like a good idea to use that method)
See also:
Rails: update_attribute vs update_attributes
The source code for update_attribute() looks like this:
2260: def update_attribute(name, value)
2261: send(name.to_s + '=', value)
2262: save
2263: end
it could fail if there is a bug with the dirty-tracking of attributes...
Code 2:
The second code looks correct.
There are a couple of things to also consider:
1) Which attributes did you define as accessible, via attr_accessible?
Only accessible attributes will be updated via update_attributes():
http://apidock.com/rails/ActiveRecord/Base/update_attributes
2) Which validations do you use?
Are you sure the validations pass for the record when you call update_attribute?
See also:
http://guides.rubyonrails.org/active_record_querying.html
http://m.onkey.org/active-record-query-interface
http://api.rubyonrails.org/classes/ActiveRecord/Base.html

How can I access the information associated to an object from a Mercurial plugin?

I am trying to write a small Mercurial extension which, given the path to an object stored within the repository, will tell you the revision it's at. So far, I'm working from the code in the WritingExtensions article, and I have something like this:
cmdtable = {
    # cmd name    function call
    "whichrev": (whichrev, [], "hg whichrev FILE")
}
and the whichrev function has almost no code:
def whichrev(ui, repo, node, **opts):
    # node will be the file chosen at the command line
    pass
So, for example:
hg whichrev text_file.txt
will call the whichrev function with node set to text_file.txt. With the use of the debugger, I found that I can access a filelog object by using this:
repo.file("text_file.txt")
But I don't know what I should access in order to get the sha1 of the file. I have a feeling I may not be working with the right function.
Given a path to a tracked file (the file may or may not appear as modified under hg status), how can I get its sha1 from my extension?
A filelog object is pretty low-level; you probably want a filectx:
A filecontext object makes access to data related to a particular filerevision convenient.
You can get one through a changectx:
ctx = repo['.']
fooctx = ctx['foo']
print fooctx.filenode()
Or directly through the repo:
fooctx = repo.filectx('foo', '.')
Pass None instead of . to get the working copy ones.
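Putting it together with the question's extension skeleton, a minimal whichrev body might look like this (a sketch against the old Python 2 Mercurial API used in the question; error handling omitted):

from mercurial.node import hex

def whichrev(ui, repo, node, **opts):
    # 'node' is the file path given on the command line
    ctx = repo['.']     # changectx of the working directory's parent
    fctx = ctx[node]    # filectx for that file at that revision
    ui.write("%s\n" % hex(fctx.filenode()))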