I noticed a very strange way of naming CSS classes in G+ and Gmail.
Example: a-b-h-Jb a-b-Rf-dB a-Rf-dB d-s-r (see G+'s code for yourself!)
Who the hell does that? It seems impossible to keep track of what you did later on. Same for Gmail.
Is it a known way of doing CSS that I'm unfamiliar with? Is it OOCSS? If a Googler is reading this, can you please explain? Or, if you're not the one who wrote the code, please share your thoughts, or prove that I'm a dumbass who doesn't know about a fairly common CSS naming "good practice" (can I even call it that?).
Google uses something called the Google Web Toolkit (or simply GWT) to compile Java "applications" into their JavaScript/HTML/CSS counterparts. GWT was used for Gmail and Google Wave, and my assumption is that it was also used for G+.
The GWT "compiler" (CS purists would never call GWT a compiler, but the term fits) programmatically renames JavaScript functions, CSS classes, HTML form IDs, etc., so they are almost never legible.
At a guess, they probably have everything written out nicely in full at some point, and then put it through some program to compress it (reduce the length of variables). This reduces readability but also reduces file size, improving load times in theory.
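Just to illustrate the idea (these class names are made up, not Google's actual source), a renaming/minifying build step roughly does this:

    <!-- What a developer might write -->
    <div class="toolbar-button share-button">Share</div>

    <!-- What the build tool might emit after shortening the class names -->
    <div class="a-b-h-Jb">Share</div>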
When building applications, especially with static linking and a lot of dependencies, I often feel that most of this 50-megabyte executable is just unused bloat, especially if I only consider the mode I actually want to use.
Is there something that lets you run the program in various scenarios, collect data, and build the program again (or tinker with the already-compiled code) to remove the code that was never visited (replacing it with abort)? If so, what is this technique correctly called and where is it implemented?
I'm perfectly willing to use techniques, rather than tools.
What I've done for your problem is get a map file and just look through it.
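For example, with the GNU toolchain you can ask the linker to write one out at build time (file names here are just for the example, and flag spellings vary by toolchain, so treat this as a sketch):

    # gcc/GNU ld: produce a map file alongside the executable
    gcc main.c util.c -o app -Wl,-Map=app.map

    # then skim it for the largest symbols and sections
    less app.map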
There may well be a lot of methods for classes you doubt you need. Find out what references put them in there. Chances are it's just because somewhere something fancy was coded, like a bells-and-whistles container class, when something simple would do. Or a whole math library when all you needed was max.
After fixing that, the map file is smaller, and something else is the biggest thing in it, so you can do it all again.
And again...
This can cut out gobs of bloated binary.
I've been working on the web for quite a long time and I've seen the "best practices" evolve. I'm now fairly convinced that separating HTML (content), JavaScript (behavior) and CSS (UI) is the best thing to do.
A few months ago, I started using knockout.js. I chose it over similar frameworks like Backbone or Angular because a chapter of an MVC training course I followed covered Knockout, and the concept won me over. After a quick comparison on the web, it didn't look like a bad choice for my needs, at least as a starting point.
But here's my problem: when I look at my HTML code now, after a few weeks of development on a project, there are quite a lot of Knockout bindings in it, and it reminds me a lot of the old times, when we (or at least I) used to put inline JavaScript event handling in onclick attributes and so on.
Hence these two questions, which I'm not sure are 100% suited to SO, but I can't find a better Stack Exchange site to ask them:
Is using Knockout (or the other frameworks, as they all seem to work with basically the same pattern) contrary to the "separation rule"? Or is it an acceptable small step outside that rule? Or is it even perfectly acceptable because it uses "data-" attributes?
If this is somehow bad practice, is there any way to do all the binding in a separate JavaScript file, using, for example, jQuery to select the controls and apply bindings to them? If that isn't possible in Knockout, is it possible with another framework? I must admit that at the time I made my selection, I didn't think about these kinds of implications...
Thank you and sorry if this should be moved to another SE site.
I had the same initial reservations as you, but I have to say that having the bindings in the HTML and not hidden away in a JS file seems so much better to me, as the link between presentation and functionality is now completely obvious. It massively reduces the possibility of changing some HTML and breaking functionality because you weren't aware that someone had hooked up some JavaScript to an element using jQuery.
Also, as you point out, the use of the data-bind attribute does, I think, mean that it adheres to the separation rule, though if you want to stick to it rigidly then make sure all bindings point to observables, computeds or functions on your view model, and don't embed any logic in the binding itself (e.g. a visible binding that checks the state of two observables). I'm not sure I'd take it that far, though.
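For example (the property names here are invented just for illustration), instead of an expression in the view:

    <!-- logic buried in the binding -->
    <button data-bind="visible: items().length > 0 && !isSaving()">Save</button>

    <!-- the same decision moved onto the viewmodel -->
    <button data-bind="visible: canSave">Save</button>

with the computed defined in the viewmodel's JavaScript:

    this.canSave = ko.computed(function () {
        return this.items().length > 0 && !this.isSaving();
    }, this);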
I guess everyone who starts to learn KnockoutJS has the same concerns.
IMHO, there must be some way to connect models (JS objects) with views (HTML markup). Then we need something that says: "When that button is clicked, call this function with these arguments" or "Hide this element while that JS array is empty", and so on. So how can we state that connection in a readable, reusable and clean way?
If you used another JS file to handle that connection, you'd have many lines of code just to hold your wiring logic, and you'd need to know how to select each DOM element you are targeting. You'd end up with massive code (probably a lot of jQuery) just to make your HTML dynamic and alive (I bet most developers have run into that many times). I haven't used other libraries or frameworks, but I think they just make that massive code more organized.
KnockoutJS, on the other hand, uses declarative bindings as the link between models and views. They are more readable, easy to plug in and out, and they let you focus on writing a good JS model object.
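Roughly, the difference looks like this (the element ID and property names are made up for the example):

    // jQuery-style wiring: the JS file has to know how to find the element
    $('#add-button').on('click', function () { viewModel.addItem(); });

    <!-- Knockout-style declarative binding: the element states what it is bound to -->
    <button data-bind="click: addItem">Add</button>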
I guess the real test of separation is: if you needed to change your model at some point, how many changes would you have to make to your view, and vice versa?
Adding to the rest of the answers, some tips:
Make sure there's no business logic in your bindings. Any logic (which should be minimal) should be "view logic": decisions that only affect how your view looks, not how it works. Deciding how many items to display per screen is view logic; deciding whether the user can add another item is business logic. It's perfectly OK to put view logic in your viewmodel rather than your view, and it's desirable if it involves lengthy expressions.
Keep "magic numbers" out of any view logic in your bindings. If it's a parameter that could be changed (e.g. number of weeks of results to show) as opposed to a true constant (e.g. number of days in a week), make it a property of your viewmodel and reference it in any expressions in your views.
I am pretty new to developing software and am intrigued by the huge world out there! I have working knowledge of C/C++ and Java. I was thinking of making an application that would convert a web page to a PDF document. I know there are many solutions available, both online and offline, but I want to develop my own. I googled but couldn't find anything that would help me get started.
I want to know how to go about the conversion process. How do I get started? What languages and technologies are prerequisites for making a converter like this?
Thank You
At the very least, you need to get to the bottom of the following specifications:
HTML specification
CSS specification
JavaScript specification
PDF specification
Moreover, there is a lot of minor stuff such as fonts, encryption/decryption algorithms and many, many other small but still necessary things.
I think you can imagine that it's quite a long road to get all of this working. In fact, the complexity of such software is the reason why so many companies make money in this field.
Anyway, I'd suggest you start with the simple things and grow your software gradually. Start by converting HTML to an image, because it is a bit simpler: take and parse the HTML, its CSS and its JavaScript; clean up the HTML; build the DOM of the document; apply the styles; then walk the DOM and draw the elements onto the image.
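As a very rough sketch of that first "HTML to image" step, here is a browser-only toy that ignores CSS and layout entirely (so it is nowhere near a real renderer, just the skeleton of the parse-walk-draw loop):

    // Parse HTML, walk the DOM, and paint each text node onto a canvas.
    function renderHtmlToCanvas(html, canvas) {
        var doc = new DOMParser().parseFromString(html, 'text/html'); // build the DOM
        var ctx = canvas.getContext('2d');
        ctx.font = '16px sans-serif';
        var y = 20;
        var walker = doc.createTreeWalker(doc.body, NodeFilter.SHOW_TEXT);
        var node;
        while ((node = walker.nextNode())) {
            var text = node.textContent.trim();
            if (text) {                      // skip whitespace-only nodes
                ctx.fillText(text, 10, y);   // "layout": one text node per line
                y += 20;
            }
        }
    }

    // Usage:
    // renderHtmlToCanvas('<h1>Hello</h1><p>world</p>', document.querySelector('canvas'));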
Good luck!
Why aren't HTML/JavaScript/CSS becoming compiled languages (or maybe even merging into a single compiled language)? What if browsers ran a "browser virtual machine" and HTML/JavaScript/CSS sources could be compiled to "browser bytecode"? Wouldn't that help developers and users a lot?
I can see a few challenges:
What to do with the zillions of existing pages? Make this compilation optional, so if you want you can keep using plain old HTML. If you want to feed a browser a compiled page, just use .chtml, for example.
How would search providers index pages? Make a decompiler that turns the bytecode back into the exact original sources (like Flash can be decompiled, for example). Or search providers could use the same virtual machine and get the data they need from there.
How to make it compatible with all browsers? Have one centralized body (let's say the W3C) develop this virtual machine, and then each browser would embed it.
But what about benefits:
Speed.
Size.
No more "loose" and "half-correct" html. It is either correct or won't compile.
Looks the same in every (supported) browser.
If not bytecode, then at least have some native compression going on; HTML is probably not the most efficient way of storing data. I know there is gzip, but why compress pages every time on the server and decompress them in the browser if we could compress a page once and feed it to the browser?
So what stops us from taking this road (well, besides a huge amount of effort to make it all happen)?
Ah, but Javascript IS becoming a compiled language. Check out Firefox 3.5 with TraceMonkey. It's insanely fast compared to um you-know-who's browser. It's true that JS will never be C, but it's a much more dynamic language than C is, and in many ways that makes it more expressive and powerful.
As far as HTML goes, I don't think that the lack of validity of HTML is a huge detriment to speed. I think the engines that put together the visual representation and manipulate the DOM need to get a lot better (um, IE, I'm looking in your general direction...). CSS compliance needs to get better, and CSS itself needs to get more powerful. (Get on the bus with CSS 3 people!)
But I do think that speed is going to get better on Firefox and Chrome to such an extent that people really ARE going to start using it for mainstream application development. It's funny. Adobe seems to be selling Flash as their platform for dynamic web content, MSFT is selling Silverlight for dynamic web content, and Google just wants to really improve HTML and Javascript to display dynamic web content. And Google's doing pretty well at it so far, I must say...
Your ideas have validity when they are applied to JavaScript. As others have noted, to one degree or another several vendors are trying to apply those principles to JS even now. Another big step in this area will likely be the Chrome OS Google has announced. However, when it comes to (X)HTML and CSS I think your ideas may be missing the point.
The world wide web is not a buggy and inconsistent application platform but a massive and unprecedented collection of interconnected documents. The power of the web is in the abstraction of the data from the often rigid (and breakable) visual layouts and increasingly complex in-page functionality largely provided via JavaScript. Encoding these pages in (X)HTML is ideal for making them accessible to the widest possible audience both in terms of browsers and in terms of technical knowledge required to author a page.
More and more the web is being used as an application platform - which is a powerful and exciting use of this technology - but we cannot lose sight of the fact that these Ajax-driven "web 2.0" apps are merely documents with extended functionality. Compilation doesn't make sense for a document and compression is already happening (via gzip and the like).
On a more practical note, the W3C moves at a glacial pace, and browser vendors take turns between jumping the gun on experimental features in unfinished specs and taking their sweet time supporting other specs which have been on the table and in common usage for years. The whole process is like herding cats. I wouldn't hold my breath for them to make the kind of radical changes you're proposing any time soon.
Since HTML and CSS aren't code, they can't be compiled. Google Chrome's V8 engine does actually convert JS into bytecode; expect other rendering engines to follow suit!
http://code.google.com/apis/v8/design.html
We recently reworked a PHP template system I helped create to use Minify to combine multiple JS and CSS files into one file each, and saw our file sizes drop to about 20% of the original combined sizes. Minify also does gzip and caching, so it's really amazing for speeding up websites.
http://code.google.com/p/minify/
In short you can't compile non-code, which HTML and CSS are. JS can be compiled and is starting to be, but all depends on what browsers feel like doing.
Browsers just need to be on the ball regarding supporting web standards. The more browsers do this, the less headache us web developers have. I was quite happy with YouTube's very public drop of support for IE6. We need more action like that for the web to move forward.
The V8 javascript engine (also embedded in Google Chrome, but it's open-source and liberally licensed so you're welcome to use it in the next browser you write!) does compile Javascript to native machine code -- of course, it does it "just in time" (like most modern compilers -- Java, C#, etc!), not "ahead of time" (like Fortran did in 1954 when computers were just too weak to handle compilation in the midst of execution). I'd be surprised if other good JS engines, like those in the very latest Firefox and Safari, didn't do the same.
Looks like you're not advocating "JavaScript as a compiled language" (since it obviously already IS compiled, if you're using a good JS engine), but rather "ahead-of-time" compilation for it (just when most modern languages are essentially abandoning ahead-of-time compilation). Pushing machine code rather than compilable source down the wire sounds like a mostly horrible idea (much larger size, difficulties in supporting one CPU vs another, security nightmares in properly sandboxing it, etc., etc.) with not much in terms of compensating benefits.
That said, if you're really keen on pushing machine code to the client, try out nativeclient (as long as the client is an x86 machine - forget every smart phone on the planet, many netbooks, good old macs, etc) -- at least it promises a fix to the security nightmares. If and when you're happy with nativeclient, transforming a just-in-time compiler into an ahead-of-time one is a far easier technical challenge (if you want to keep using Javascript for the sources rather than other languages, of course).
See here for a previous discussion on the matter
Not all of the reasons given are necessarily valid, but one important one is that, unless you're Google, server-side CPU cycles are a lot more valuable than client-side cycles: so it's easier to have the client compile/optimize what is quite often dynamically generated HTML/JavaScript, rather than the server.
Ken
"Speed."
You're assuming that it takes significant time to parse HTML. However, it might be that that time is insignificant compared to the time required for something else, e.g. the time required to lay out the text in the end user's window.
No more "loose" and "half-correct" html. It is either correct or won't compile.
You already get that, using [X]HTML.
"Looks the same in every (supported) browser."
You seem to be saying that there should only be one browser, or that all browsers would support it equally.
Internet standards don't happen by having a single body (the w3c) implementing something and declaring it a standard. Instead, internet standards happen by having multiple independent bodies creating multiple implementations. A consequence is:
Some people have developed something that isn't standard yet (i.e. they're ahead of the standard)
Some people haven't yet developed something that is standard (i.e. they're behind the standard)
I think your idea is sound; however, there's still no way to enforce a standard. Thus, if there were an unsupported feature, there's a good chance the entire page would simply not display anything. In the current setup, the critical information can still get through.
Google V8, which is one of many new-generation JavaScript engines, 'compiles' JavaScript into pseudocode, much like .NET 'compiles' C# on the fly. Nothing magical here. Expect more of it, especially as web apps get heavier and more demanding.
HTML
HTML is pretty much XML. DTDs exist for the various versions and developers can validate against them at any time.
CSS
CSS is not a programming language; however, I do agree that "compiled" CSS could work, seeing as compilation would compress it. However, with the uneven browser support CSS has, and the number of essential hacks any stylesheet needs to carry, you'd never manage to compile it without errors.
JS
As others have mentioned, JS IS becoming a compiled language, except that the browser compiles it for you rather than you doing it yourself.
What are the pros and cons of using a WYSIWYG editor for web page development vs hand coding?
With the exception of simply not knowing how to create something by hand coding, are there any reasons to use a WYSIWYG editor?
I hand-code, but I prefer to work with a WYSIWYG editor in tow, and for that reason I'm still using Dreamweaver as an editor. What I'm doing 95% of the time is hand-coding inside the source editor and viewing the results in the preview. Occasionally I'll drop into the WYSIWYG editor to move blocks around directly, though, and when I do I find it invaluable. I never use any of Dreamweaver's wizards or generated code, and I clean up the HTML manually too.
I see nothing wrong with this approach, it strikes me as the HTML design equivalent of an IDE prompting to complete functions etc. (intellisense or whatever your IDE may call it)
I also always use a templating system of one form or another so my scripting code is totally separate from html.
The combination in Dreamweaver of the occasional WYSIWYG edit (invaluable, I find, when laying things out or making 'macro' layout changes) and the one-click preview has kept me with it despite looking at better tools - Aptana, NetBeans, etc. Indeed, I would dearly like to move to another system - see this question - preferably something that runs on Ubuntu and strips out the crud in Dreamweaver, leaving just the WYSIWYG features and possibly an intelligent JavaScript editor, but I'm yet to find anything. KompoZer is starting to look promising though.
There are a variety of reasons to use a WYSIWYG editor when creating HTML.
Allows for quick prototyping
Allows designer-y people to be actively involved in front end development
Some WYSIWYG tools will set you up with a clean base to be modified (Dreamweaver's CSS layouts are actually pretty good)
I think the important thing to remember is that after you get it into approximate shape, you should dig into the code and make sure there's nothing weird going on. Nested spans, odd absolute positioning, and (lord almighty) table based layouts count as weird things. Even if you use a WYSIWYG to start with, you should always check that the code is valid and looks the way you would expect it to.
WYSIWYG can be handy if you don't know HTML or just want to whip something together extremely fast. You're not going to get clean code, though. Most WYSIWYG editors still throw out a bunch of unneeded, dirty HTML instead of clean, solid markup.
Anyone familiar with HTML can usually whip up something just as fast by hand in an HTML editor. And it will be clean, xhtml compliant semantic markup instead of thrown together templates with extraneous crud.
If you set up the template and CSS properly, you can probably be faster hand-coding than with a WYSIWYG editor, as those work against you when you're trying to create properly abstracted CSS with degradable semantic markup.
If the design isn't terribly important and you're just throwing a website together there's nothing wrong with using a WYSIWYG. Or if you're trying to create a marginally functional mock up for a client it's a good way to get something built quickly.
I develop in ASP.NET most of the time, so I'm in VS2008 most of the time; however, whenever possible (which is most of the time) I still hand-code, but I do it in VS2008's source mode. When working with ASP.NET, there's always somewhat bloated code which you just sort of have to accept (to a point).
However, in my free time I also do PHP development, and like hell will I ever not hand-code PHP. Plus, it's not like VS with the drag-and-drop stuff.
If you want to be really good at what you do, as in guru-level good, drop the WYSIWYG stuff and start hand-coding. The learning curve is steeper, but it makes you better at what you do in a meaningful way.
It comes down to maintainability and changeability. It is usually much easier to change a GUI layout in a GUI editor than by hand.
"Oh you want to move that JTable from this position to this other completely unrelated position". If you have handcoded it, it basically turns out to be a programming job (which for non-trivial layouts might actually be HARD), but if it is in a good GUI editor, it is probably just a matter of point-click-move-release.
People who hand-code have probably never had to make that kind of change :)
The advantages of using a WYSIWYG editor for web development are pretty obvious. Development is much simpler and faster even if you know how to code for the web, since web development requires knowing many different languages, and things can get messy when trying to get them to work together as planned. Real WYSIWYG designers should be able to solve those complexities by allowing you to develop visually on one form, in one layer.
The disadvantage of this kind of development paradigm is that it can sometimes limit you, meaning that you are usually constrained within a predefined framework.
Therefore it is important to find a framework that on top of its WYSIWYG development experience is open to extension and customization. Take a look at http://www.visualwebgui.com/.
This is the same type of thing as Glade versus hand-coding your GTK code. I think you add a level of obfuscation and things that might break when you hand-edit your code. However, as Spencer said, if you need to do it and it needs to work, usually WYSIWYG will work pretty well and reliably. If you're doing something that you're going to be keeping up to date and managing for years to come, you should know every piece of code that is in that application/web page.
Really it comes down to your job function. If you're primarily a designer, WYSIWYG editors can be very handy for creating mock-ups for clients, or prototypes that can be handed to developers to code against.
If you're a developer, you'll probably prefer to hand-code.
Most WYSIWYG editors offer a code view and design view which enables you to switch back and forth pretty easily.
My suggestion is to try and learn how to hand-code your site. After years of web development, I find that hand-coding is faster for me than attempting to use a designer. Moreover, as you gain a better understanding of how HTML and CSS work together you'll find that there's very little that can't be done gracefully.
It can be frustrating to learn, but you'll find that you're better for it in the long run.