Is real namespacing (like what is used in C#/Java) possible with Polymer at the moment?
Something like:
If not, where would be the best locations to communicate with the team and know their thoughts about it?
Namespacing is not really supported in HTML. Here's a quote from the WHATWG FAQ on it:
However, unlike the XHTML serialization, there is no real namespace syntax available in the HTML serialization
The topic has come up a number of times but, at least to my knowledge, there are no immediate plans to implement namespacing. In the future it may be possible to access the element registry to create aliases, but that's also just conjecture on my part.
If you'd like to discuss it the best place is probably the mailing list.
I have a few related questions about pragmas. What got me started on this line of questions was trying to determine whether it's possible to disable some warnings without going all the way to no worries (I'd still like to worry, at least a little bit!). And I'm still interested in the answer to that specific question.
But thinking about that issue made me realize that I don't really understand how pragmas work. It's clear that at least some pragmas take arguments (e.g., use isms<Perl5>). But they don't seem to be functions. Where do they fit into the overall MOP? Are they sort of like Traits? Or packages? Is there any way to introspect over them? See what pragmas are currently in effect?
Are pragmas built into the language, or are they something that users can add? When writing a library, I'd love to have some errors/warnings that users can optionally disable with a pragma – is that possible, or are they restricted to use in the compiler? If I can create my own pragmas, is there a practical difference between setting something with a pragma versus with a dynamic variable, aside from the cleaner look of a pragma? For that matter, how do we decide what language features should be set with a pragma versus a variable (e.g., why is $*TOLERANCE not a pragma)?
Basically, I'd be interested in any info about pragmas that you could offer or point me towards – though my specific question is still whether I can selectively turn off certain warnings.
Currently, pragmas are hard-coded in the handling of the use statement. They usually either set some flag in a hash that is associated with the lexical scope of the moment, or change the setting of a dynamic variable in the grammar.
Since use is a compile time construct, you can (currently) only use compile time constructs to get at them, so you'd need a BEGIN block if it is not part of a use.
I have been in favour of decoupling use from pragmas in the past, as I see them as mostly a holdover from the Perl roots of Raku.
All of this will be changed in the RakuAST branch. I'm not sure what Jonathan Worthington has in mind regarding pragmas in the RakuAST context. For one thing, I think we should be able to "export" a pragma to the scope of a use statement.
I have created a Singleton class that handles my project texts. What is the appropriate name of a Singleton class like this?
TextManager?
TextHandler?
TextController?
Is there a difference in meaning of these names?
UPDATE:
The class stores the project texts as XML and has a method for returning the correct text.
function getText(uid : String) : String
I suppose it doesn't deal with adding/removing/... (that is, managing) the texts (maybe just loading them), so it isn't a "real" Manager.
It also doesn't "control" the texts (nothing like "you're only accessible from ..." or "return another value for that key if ...").
The class provides you with texts.
I suppose it's some kind of localized text provider, right?
So why don't you call it LocalizedTextProvider?
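For illustration only, here is a minimal sketch of what such a provider could look like (written in Java rather than ActionScript, with hypothetical names like LocalizedTextProvider and loadTexts):

import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: a class that only loads texts and provides them by key.
public class LocalizedTextProvider {
    private final Map<String, String> texts = new HashMap<>();

    // In the real class this would come from parsing the project's XML.
    public void loadTexts(Map<String, String> loaded) {
        texts.putAll(loaded);
    }

    // The single job of the class: provide the text for a given uid.
    public String getText(String uid) {
        return texts.getOrDefault(uid, "");
    }
}

The name then says exactly what callers can expect: they get texts from it, nothing more.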
I usually call something like this
TextUtility
or
TextHelper
The problem with 'Handler' is that it implies some sort of event handling. The same goes for 'Controller'; it has meaning in a different context.
I believe Controller is 'reserved' for the MVC model but I may be wrong. TextHandler and TextManager may be better but at least at the place I work, 'Manager' in a service/class is generally discouraged since it is assumed that every class 'manages' something (this may just be culture-specific, though).
I'd vote for TextHandler out of those three. It may also depend slightly on your programming language.
This actually sounds like a service or repository to me...
TextService or TextRepository? TextModel?
But let me back up a bit... the Singleton pattern is a pretty bad way of accessing something like this. Just google "Singleton pattern problems" if you want to see what I am talking about. Plus, in AS3, you don't have private constructors so you can't implement the Singleton pattern in a pure way.
Instead, I really prefer composition via "Inversion of Control" (IoC) containers. There are plenty of them out there for ActionScript. They can be really lightweight but they decouple your components in a really elegant way.
Sorry to inject my thoughts here... ymmv :)
EDIT -- More on eliminating the Singleton pattern
I have written about several strategies for eliminating singletons in your code. That article was written for C#, but all the same principles apply. In that article, I don't talk explicitly about IoC containers.
Here is a pretty good article about IoC in Flex. In addition, several frameworks give you IoC capabilities:
Swiz
Robot Legs
fling
Cairngorm
flex-ioc
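As a rough sketch of the composition idea (plain constructor injection, shown in Java rather than ActionScript, with made-up names; the containers listed above just automate this wiring):

// Hypothetical example: the consumer receives its dependency
// instead of reaching for a global getInstance().
interface TextProvider {
    String getText(String uid);
}

public class WelcomeScreen {
    private final TextProvider texts;

    // A composition root or IoC container passes the dependency in.
    public WelcomeScreen(TextProvider texts) {
        this.texts = texts;
    }

    public String title() {
        return texts.getText("hello_world");
    }
}

Swapping the real provider for a test double then requires no changes to WelcomeScreen, which is the main decoupling benefit.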
All three of the names you proposed can be interpreted in the same way. Some people prefer handlers while others might say controllers... it really is a matter of semantics. Whatever convention you choose to adopt, just be consistent. The common notion you should capture, though, is that the class you are describing is not doing anything itself. It should only be in charge of delegating, since that's what managers do with employees and what controllers do in the classic MVC paradigm.
Since I usually reserve Handler for the event/message handling context and Controller for actions and MVC stuff, I would go with something different:
TextResources.get(key)
I18n.get(key) (if your class is in fact used for internationalisation)
I usually reserve Helpers for classes that simply transform some data into something to be used in the view.
TextCache? Sounds like you are just using it to store and retrieve data...
Why not ProjectNameTexts?
FooTexts.getInstance().getText('hello_world');
In Podcast 58 (about 20 minutes in), Jeff complains about the problems of HTML.Encode() and Joel talks about using the type system to have ordinary strings and HTMLStrings:
A brief political rant about the evil of view engines that fail to HTML encode by default. The problem with this design choice is that it is not “safe by default”, which is always the wrong choice for a framework or API. Forget to encode some bit of user-entered data in one single stinking place in your web app, and you will be totally owned with XSS. Believe it. I know because it’s happened to us. Multiple times!
Joel maintains that, with a strongly-typed language and the right framework, it’s possible (in theory) to completely eliminate XSS — this would require using a specific data type, a type that is your only way to send data to the browser. That data type would be validated at compile time.
The comments at the blog post mention using static analysis to find potential weaknesses. The transcript Wiki isn't done yet.
Is it possible to implement Joel's suggestion without having a new ASP.NET framework?
Might it be possible to implement it simply by subclassing every control and enforcing new interfaces based on HTMLString? If most people already subclass controls in order to be better able to inject site-specific functionality, wouldn't this be fairly easy to implement?
Would it be worth doing this instead of investing in static analysis?
To use HtmlString everywhere, you would essentially have to rewrite every property and method of every web control. System.String is sealed, so you can't subclass it.
An easier (but still very time consuming) approach would be to use control adapters to replace web controls with safe alternatives. In this case, you would subclass each web control and override the Render methods to HTML-encode dynamic content.
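To make the type-based idea concrete, here is a hedged sketch of a dedicated markup type (shown in Java with a made-up HtmlString class, not the ASP.NET one): if the render path only accepts this type, plain strings are HTML-encoded by default and trusted markup has to be marked explicitly.

// Hypothetical illustration of a "safe by default" output type.
public final class HtmlString {
    private final String value;

    private HtmlString(String value) {
        this.value = value;
    }

    // The default path: any plain string gets HTML-encoded.
    public static HtmlString encode(String raw) {
        String escaped = raw.replace("&", "&amp;")
                            .replace("<", "&lt;")
                            .replace(">", "&gt;")
                            .replace("\"", "&quot;");
        return new HtmlString(escaped);
    }

    // The explicit, auditable escape hatch for markup you trust.
    public static HtmlString trusted(String markup) {
        return new HtmlString(markup);
    }

    @Override
    public String toString() {
        return value;
    }
}

With a rendering method declared to take only HtmlString, forgetting to encode becomes a compile-time error instead of an XSS hole.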
How does one study open-source libraries code, particularly standard libraries?
The code base is often vast and hard to navigate. How do I find a particular function or class definition?
Do I search through downloaded source files?
Do I need CVS/SVN for that?
Maybe a web search?
Should I just know the structure of the standard library?
Is there any reference on it?
Or do some IDEs have such features? Or some other tools?
How to do it effectively without one?
What are the best practices of doing this in any open-source libraries?
Is there any convention for how sources are handled on Linux/Unix systems?
What are the differences for specific programming languages?
Broad presentation of the subject is highly encouraged.
I mark this 'community wiki' so everyone can rephrase and expand my awkward formulations!
Update: I probably didn't express the problem clearly enough. What I want is to view just the source code of some specific library class or function. The problem is mostly about work organization and usability: how do I navigate the huge pile of sources to find the thing? Maybe there are specific tools or approaches? It feels like some solution(s) for this should have existed for a long time.
One thing to note is that standard libraries are sometimes (often?) optimized more than is good for most production code.
Because they are widely used, they have to perform well over a wide variety of conditions, and may be full of clever tricks and special logic for corner cases.
Maybe they are not the best thing to study as a beginner.
Just a thought.
Well, I think that it's insane to just sit down and read a library's code. My approach is to search whenever I come across the need to implement something myself, and then study the way it's implemented in those libraries.
And there are also a lot of projects/libraries with excellent documentation, which I find more important to read than the code. On Unix-based systems you often find valuable information in the man pages.
Wow, that's a big question.
The short answer: it depends.
The long answer:
Some libraries provide documentation while others don't. Standard libraries are usually pretty well documented, whether or not your chosen implementation of the library includes documentation. For instance, you may have found an implementation of the C standard library without documentation, but the C standard has been around long enough that there are hundreds of good reference books available. Documentation with hyperlinks is a very useful way to learn a new API. In any case, the first place I would look is the library's main website.
For less well known libraries lacking documentation I find two different approaches very helpful.
First is a doc generator. Nearly every language I know of has one. It basically parses a source tree and creates documentation (usually as HTML or XML) which can be used to learn a library. Some use specially formatted comments in the code to create more complete documentation. JavaDoc is one good example of this. Doc generators for many other languages borrow from JavaDoc.
Second is an IDE with a class browser. These act as a sort of on-the-fly documentation. Some display just the library's interface. Others include description comments from the library's source.
Both of these will require access to the library's source (which will come in handy if you intend to actually use the library).
Many of these tools and techniques work equally well for closed/proprietary libraries.
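As a small illustration of the doc-generator point above, here is a hedged Java sketch; the specially formatted /** ... */ comments are what a tool like javadoc turns into browsable HTML.

/**
 * A tiny example class whose comments a doc generator can pick up.
 */
public class Greeter {

    /**
     * Builds a greeting for the given name.
     *
     * @param name the person to greet
     * @return the greeting text
     */
    public String greet(String name) {
        return "Hello, " + name + "!";
    }
}

Running javadoc Greeter.java generates HTML pages from those comments, which is often an easier entry point into an unfamiliar code base than the raw source.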
The standard Java libraries' source code is available. For a beginning Java programmer this can be a great read. The Collections framework in particular is a good place to start. Take, for instance, the implementation of ArrayList and learn how you can implement a resizable array in Java. Most of the source even has useful comments.
The best parts to read are probably those whose purpose you can understand immediately. Start with the easy pieces and try to follow all the steps that are hidden behind that single call you make from your own code.
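To give a flavour of what you will find there, here is a stripped-down sketch (not the real ArrayList code) of how a resizable array can be implemented in Java:

import java.util.Arrays;

// Simplified growable array, in the spirit of ArrayList.
public class SimpleList {
    private Object[] elements = new Object[10];
    private int size = 0;

    public void add(Object element) {
        // Double the backing array when it is full.
        if (size == elements.length) {
            elements = Arrays.copyOf(elements, elements.length * 2);
        }
        elements[size++] = element;
    }

    public Object get(int index) {
        if (index < 0 || index >= size) {
            throw new IndexOutOfBoundsException("Index: " + index);
        }
        return elements[index];
    }

    public int size() {
        return size;
    }
}

Reading the real implementation then shows what extra concerns (generics, fail-fast iterators, capacity tuning) a production library adds on top of this core idea.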
Something I do from time to time:
apt-get source foo
Then create a new C++ project (or whatever) in Eclipse and import the source.
=> Wow! Browsable! (use F3)
Does anyone out there know about examples and the theory behind parsers that will take (maybe) an abstract syntax tree and produce code, instead of vice versa? Mathematically, at least intuitively, I believe the function of code -> AST is reversible, but I'm trying to find work/examples of this... besides the usual resources like the Dragon book and such. Any ideas?
Such a thing is called a Visitor. It traverses the tree and does whatever has to be done, for example optimize or generate code.
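Here is a minimal sketch of that idea in Java (hypothetical node and visitor names): the visitor walks a tiny expression tree and emits source text.

// Hypothetical AST with two node kinds: numbers and additions.
interface Expr {
    String accept(CodeGenerator generator);
}

class Num implements Expr {
    final int value;
    Num(int value) { this.value = value; }
    public String accept(CodeGenerator g) { return g.visit(this); }
}

class Add implements Expr {
    final Expr left, right;
    Add(Expr left, Expr right) { this.left = left; this.right = right; }
    public String accept(CodeGenerator g) { return g.visit(this); }
}

// The visitor: turns the tree back into code.
class CodeGenerator {
    String visit(Num n) { return Integer.toString(n.value); }
    String visit(Add a) {
        return "(" + a.left.accept(this) + " + " + a.right.accept(this) + ")";
    }
}

For example, new Add(new Num(1), new Add(new Num(2), new Num(3))).accept(new CodeGenerator()) yields "(1 + (2 + 3))".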
Our DMS Software Reengineering Toolkit insists on parsers and parser-inverses (called "prettyprinters") as "poker-ante" to mechanical processing (analyzing/transforming) of arbitrary languages. These provide full round-trip: source text to ASTs with captured position information (file/line/column) and comments, and AST to legal source text including regenerating the original token positions ("fidelity printing") or nicely formatted ("prettyprinting") options, including regeneration of the comments.
Parsers are often specified by a combination of grammars and lexical definitions of tokens; these notations are typically compiled into efficient parsing engines, and DMS does that for the "parser" side, as you might expect. Other folks here suggest that a "visitor" is the way to do prettyprinting, and, like assembly code, it is the right way to implement prettyprinting at the lowest level of abstraction. However, DMS prettyprinters are specified in terms of a text-box construction language over grammar terms, somewhat like LaTeX, which enables one to control the placement of the various language elements horizontally, vertically, embedded, spaced, concatenated, laminated, etc. DMS compiles these into efficient low-level visitors (as other answers suggest) that implement the box generation. But like the parser generator, you don't have to see all the ugly detail.
DMS has some 30+ sets of these language front ends for various programming languages and formal notations, ranging from C++, C, Java, C#, COBOL, etc. to HTML, XML, assembly languages for some machines, temporal property specifications, specs for composable abstract algebras, etc.
I rather like lewap's response:
find a mathematical way to express a visitor and you have a dual to the parser
But you asked for a sample, so try this on for size: Visual Studio contains a UML editor with excellent symmetry. Both it and the code editors are implemented as views of the same model, and editing either one modifies the model, so everything remains in sync.
Actually, generating code from a parse tree is strictly easier than parsing code, at least in a mathematical sense.
There are many grammars which are ambiguous, that is, there is no unique way to parse them, but a parse tree can always be converted to a string in a unique way, modulo whitespace.
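As a tiny made-up illustration (not from the answer): under the ambiguous grammar E -> E "-" E | number, the single string "1 - 2 - 3" has two distinct parse trees, yet each tree prints back to text in exactly one way.

class Sub {
    final Object left, right;   // either a nested Sub or an Integer
    Sub(Object left, Object right) { this.left = left; this.right = right; }

    String print() {
        String l = (left instanceof Sub) ? ((Sub) left).print() : left.toString();
        String r = (right instanceof Sub) ? ((Sub) right).print() : right.toString();
        return "(" + l + " - " + r + ")";
    }
}

// new Sub(new Sub(1, 2), 3).print()  ->  "((1 - 2) - 3)"
// new Sub(1, new Sub(2, 3)).print()  ->  "(1 - (2 - 3))"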
The Dragon book gives a good description of the theory of parsers.
There is theory, along with working implementations and examples, of reversible parsing in Haskell. The library is by Paweł Nowak. Please refer to
https://hackage.haskell.org/package/syntax
as your starting point. You can find the examples at the following URLs.
https://hackage.haskell.org/package/syntax-example
https://hackage.haskell.org/package/syntax-example-json
I don't know where to find much about the theory, but boost::spirit 2.0 has both qi (parser) and karma (generator), sharing the same underlying structure and grammar, so it's a practical implementation of the concept.
Documentation on the generator side is still pretty thin (spirit2 was new in Boost 1.38, and is still in beta), but there are a few bits of karma sample code around, and AFAIK the library's in a working state and there are at least some examples available.
In addition to 'Visitor', 'unparser' is another good keyword to web-search for.
That sounds a lot like the back end of a non-optimizing compiler that has its target language the same as its source language.
One question would be whether you require the "unparsed" code to be identical to the original, or just functionally equivalent.
For example, would it be OK for the output to use a different indentation style than the original? That information wouldn't normally be stored in the AST because it's not semantically important.
One thing to look at would be automatic code refactoring tools.
I've been doing these forever, and calling them "DeParse".
It only gets tricky if you also want to recapture whitespace and comments. You have to tuck them into the parse tree so you can regenerate them on output.
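One common way to do that (a hedged sketch, not any particular tool's API) is to attach the captured comments and whitespace as "trivia" on the nearest token, so the printer can simply re-emit them:

import java.util.ArrayList;
import java.util.List;

// Hypothetical token that carries the comments/whitespace seen before it.
class Token {
    final String text;
    final List<String> leadingTrivia = new ArrayList<>();

    Token(String text) { this.text = text; }

    // On output, re-emit the trivia first, then the token itself.
    String print() {
        StringBuilder out = new StringBuilder();
        for (String trivia : leadingTrivia) {
            out.append(trivia);
        }
        return out.append(text).toString();
    }
}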
The "Visitor Pattern" idea is good. But, I should consider "Visitor" pattern as a lineal list pattern, or, as a generic pattern, and add patterns for more specific cases like Lists, Matrices, and Trees.
Look for a "Hierarchical Visitor Pattern" or "Tree Visitor Pattern" on the web.
You have a tree data structure ("Collection") and want to do something with the data, each time you "visit", "iterate" or "read" an item from the tree.
In your case, you have a tree data structure that represents the result of scanning/parsing some source code. Then you read each item's data and transform it into destination code.
There are several "lens languages" that allow bidirectional transformation of source code.
It is also possible to implement reversible parsers using definite clause grammars in Prolog. In SWI-Prolog, the phrase/3 predicate converts parse trees into text and vice-versa. This book provides some additional examples of reversible parsing in Prolog.