Web development has changed dramatically over the last few years. With the enormous number of JavaScript libraries and the new HTML5 standard, it is easier today to create rich Internet applications (RIAs). When building RIAs, you will probably want to reuse some of the web components you have built. But how can you do that with the current state of HTML?
I have started learning Angular 2, which is based on Web Components:
http://webcomponents.org/
The only problem I have is that there now seems to be no rule for how to write HTML:
http://w3c.github.io/webcomponents/spec/custom/
Does it mean that <div></div> will no longer exist?
And if HTML can be written however you wish, will there be some kind of rule? What about HTML5?
Because now you can easily write <header-layout></header-layout> and forget about validation?
Well, I just had a discussion about this with our CTO.
Web Components are something of the future, and it is too early to expect that when we write <clock></clock> we will get the time displayed on our page, or that if we write <clock timeformat='12'></clock>, boom, we get a clock in 12-hour format. My point is that this is well ahead of the current state of things.
You can open GitHub and inspect their last-commit element, and you will see that it is a web component. That was the first and last Web Component I have seen used at a large scale. Here's a link for reference: http://webcomponents.org/articles/interview-with-joshua-peek/
I don't think there are any rules as such for now; like I said, it's very much a thing of the future.
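For what it's worth, here is a rough sketch (just an illustration, not taken from the spec) of how a <clock>-style element with a timeformat attribute could be defined with the Custom Elements API, assuming a browser that supports customElements:

// Minimal sketch of a clock custom element. The timeformat attribute
// ('12' or '24') controls how the time is rendered.
class ClockElement extends HTMLElement {
  connectedCallback() {
    const use12Hour = this.getAttribute('timeformat') === '12';
    const render = () => {
      this.textContent = new Date().toLocaleTimeString([], { hour12: use12Hour });
    };
    render();
    this._timer = setInterval(render, 1000); // keep the clock ticking
  }

  disconnectedCallback() {
    clearInterval(this._timer); // stop updating when removed from the DOM
  }
}

// Custom element names must contain a hyphen, so 'x-clock' rather than 'clock'.
customElements.define('x-clock', ClockElement);

// Usage in markup: <x-clock timeformat="12"></x-clock>

Note that custom element names are required to contain a hyphen, which is exactly what keeps them from colliding with <div> and the other standard HTML5 elements.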
I know there are several tools available to find unused CSS on a static web page. But in most real-world scenarios I encounter, a lot of the CSS is only used after some interaction or other on the page, maybe a new modal opening up, an options popup, etc.
In such scenarios, what would you suggest? How do I keep tabs on my ever-growing render-blocking CSS?
The only way I can see to do that is by running the usual unused-CSS-detector tools in conjunction with Selenium: test known interactions and see what's left unused. But a big assumption here is that I'd need to know all the interactions on my page that could use new CSS. Is there a way to achieve my goal without making this assumption?
In an ideal world, I'd be able to post back all the CSS used by a visitor's browser on my page to my server. Then I'd collect data over a month, aggregate it, and get a pretty accurate idea of the actual unused CSS.
Any good ideas?
I am the author of a tool that aims to do what you are describing. Everywhere I have worked, the CSS is this "append-only" thing that is too risky and too time-consuming to clean up. And even when you try, the ROI is so low that it's not worth it.
So I am working on a tool that does very much what you describe. The goal is to bring confidence about what can be removed, and to actually do it automatically by submitting pull requests.
A snippet of JavaScript runs in the browser and sends reports of what is being used to a server. Once enough data has accumulated to build a "confidence score", it can create pull requests automatically to remove selectors that are not actually used.
It is still at a very early stage, but you are welcome to try it and give some feedback.
https://www.bleachcss.com/
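To illustrate the idea (this is not the actual BleachCSS code; the /css-usage endpoint and the payload shape are made up), a minimal in-browser reporter could look something like this:

// Hypothetical sketch of an in-browser "used CSS" reporter.
function collectCssUsage() {
  const used = [];
  const unused = [];
  for (const sheet of Array.from(document.styleSheets)) {
    let rules;
    try {
      rules = Array.from(sheet.cssRules); // throws for cross-origin stylesheets
    } catch (e) {
      continue;
    }
    for (const rule of rules) {
      if (!(rule instanceof CSSStyleRule)) continue; // skip @media, @font-face, ...
      try {
        // Note: selectors relying on :hover and similar states will undercount.
        (document.querySelector(rule.selectorText) ? used : unused).push(rule.selectorText);
      } catch (e) {
        // selectors with unsupported syntax can throw; ignore them
      }
    }
  }
  return { url: location.pathname, used, unused, at: Date.now() };
}

// Report periodically and on unload, so interactions (modals, popups, etc.)
// that reveal elements later are still captured.
function report() {
  navigator.sendBeacon('/css-usage', JSON.stringify(collectCssUsage()));
}

setInterval(report, 30000);
window.addEventListener('pagehide', report);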
I am an intermediate-level web designer and developer. So far I have built five official websites, for hotels and other clients (two years of experience). But I have an uncertainty.
Should I use an already-coded template like this for building my next websites? I have already built one website with that great template, and it saved me a good amount of time. My fear is that without that template, I don't have excellent skills for coding components like navbars, footers, making a page element stay in the position I want, and so on.
I almost always copy code snippets from the net; let's say I'm not coding from scratch.
My thought is: hey, you're using a template, so you're not a real web designer/developer. I was thinking of using only Bootstrap + WordPress, but if I use that CANVAS template (it already has lots of Bootstrap components coded, with CSS and JavaScript for almost any situation), it would save me even more time.
What do you say, what should I do? Thank you!
Whatever it takes to achieve your objective quickly and with quality output should be your way forward. I would not worry about the purism of the solutions you use. When you need more, you will know it: the time will come when this template is not enough, and then you can go back to custom solutions. Ultimately, no one can handle every aspect of development; using open source, libraries, and frameworks is nowadays a skill in itself. Reusing ready-made modules will help you focus on other crucial elements of your app (moving to mobile, speed, additional functionality, etc.).
It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form. For help clarifying this question so that it can be reopened, visit the help center.
Closed 10 years ago.
For example, can I write a business-strength application in HTML5 that mimics our current system, which contains many highly interactive grids, custom trees, and so on?
We have a good working system written in C# WinForms with parts done in WPF and have recently embarked on writing a custom app for the iPad that communicates through WCF with our main server hosts. We now have a very fast custom grid written in C# that compiles to Objective C through MonoTouch and also a cool interactive pie chart.
Now my boss wants to build a version for Android, and I am wondering whether we shouldn't really be spending our time creating a single HTML5 app that can run on both the iPad and its Android equivalents.
The thing is - take a grid - my grid on the iPad is fast (reusable cells, etc.) - how would I create a grid from a 'dataset'-type source in HTML5? Do I really have to go down the road of creating lots of tags and then submitting them to the browser? Are third-party widgets like jqWidgets the answer?
Thanks
This is a bit anecdotal, but my experience has been that HTML5/JavaScript is just not fast enough on mobile devices yet (as of 2012), particularly if you are displaying lots of data and especially if you want fast response times with interactive features. Give it another year, and I wouldn't be surprised if this statement becomes outdated as mobile devices continue to evolve.
Mobile web development certainly has its uses currently, e.g., if you want to target the most devices with a single codebase; if you don't have enough development resources and are willing to settle for a non-native experience; if you don't have experience in the languages required for native development, etc.
Given that you already have done the work for the iPhone app, my humble opinion is that it's probably better to move forward with a native Android application -- you will get a much more responsive application for about the same amount of work at this point.
how would I create a grid from a 'dataset' type source in html5?
Depending on what the grid contains, you can do it with plain old HTML tables or with a combination of other elements styled with CSS. There are really many more considerations, though: e.g., does it need to work on small screens (phones) or just larger screens (tablets)? Often you can't really fit a whole grid on a small screen, so you end up with UIs that aren't really grids anymore. You can take a look at a mobile JavaScript framework such as jQuery Mobile to see how they've done it, and maybe even consider using the framework in your own application.
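For example, a 'dataset'-type source on the JS side is usually just an array of records, and a basic grid can be generated from it without any widget library (the field names below are made up for illustration):

// Minimal sketch: render an array of records into a plain HTML table.
function renderGrid(container, rows) {
  const columns = rows.length ? Object.keys(rows[0]) : [];
  const table = document.createElement('table');

  // Header row from the column names
  const headerRow = table.createTHead().insertRow();
  for (const col of columns) {
    const th = document.createElement('th');
    th.textContent = col;
    headerRow.appendChild(th);
  }

  // One body row per record
  const tbody = table.createTBody();
  for (const row of rows) {
    const tr = tbody.insertRow();
    for (const col of columns) {
      tr.insertCell().textContent = row[col];
    }
  }

  container.appendChild(table); // attach the finished table in one go
}

renderGrid(document.getElementById('grid'), [
  { id: 1, name: 'Widget', price: 9.99 },
  { id: 2, name: 'Gadget', price: 14.5 },
]);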
The various JavaScript frameworks (e.g., jQuery and Dojo) have mobile-device-specific widgets of quite high quality. Have a look at http://dojotoolkit.org/features/mobile and http://jquerymobile.com/
How close do these get to native apps developed in Objective-C and Java? Not perfect, but maybe good enough. You can also use a combination of native and HTML in the same app, with some pages native and some HTML. I'm doing that kind of thing with IBM's Worklight, but then I would, since I work for IBM ;-) Irrespective of specific products, I do see folks taking that approach.
In addition to portability, another benefit of having much of the app in HTML is that updated versions can be delivered without going through an app store - this increases the agility of functional delivery.
As evidence, I'd say that a lot of Windows 8's apps are just wrapped JS/CSS/HTML, with a few APIs which Microsoft supplies to allow access to hardware/the filesystem.
I wouldn't think that they've gone so far as to make Excel 2013 JS-based...
...however, with that said, they have gone so far as to allow developers to extend their programs with applet views of the data -- those applets are all going to be built on "html5" (again with an MS-Office JS API).
It's not an easy road to go down -- people look to jQuery to be their saviour for these types of things.
This is exactly where jQuery would not be what you wanted, if you were looking to hack together a solution.
For example:
$(".table_cell").click(function () { alert(/*whatever*/); });
People think that jQuery is setting up a single delegated listener that catches a click on any element with class="table_cell".
That's really not what it's doing.
It's looping through each one, and attaching an event-listener to each one, directly.
It's these little things that people miss -- people like Twitter, who didn't bother caching references to elements, because jQuery is so easy to hack things together with.
So then you have JS touching (or acting on) dozens or hundreds of individual elements, at all times.
That's not good for anyone.
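For contrast, here is a sketch of the delegated version of that snippet: one listener on a container element instead of one per cell (the #data_table id is just an example):

// Delegated version: a single handler on the table catches clicks that
// bubble up from any .table_cell inside it, including cells added later.
$("#data_table").on("click", ".table_cell", function () {
  alert($(this).text());
});

One handler instead of hundreds, and it keeps working for cells that are added to the table later.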
jQuery isn't bad at all -- it's quite helpful, as a low-level construct to help skirt around browser differences.
Some of its plugins are also all right.
I can't guarantee that they're all high-performing answers to all things.
But some of the plugin-creators understand how to maintain a responsive and well-performing program.
Which ones are right for your exact needs? Who knows, other than you.
Will they perform perfectly, and quickly?
That depends on a lot of different things, of course.
Coming from C#, you might do well to look at something like AngularJS.
Angular itself ships with an internal version of jQuery to handle some of the low-level stuff that jQuery has made a solved problem.
But it allows for data binding and pretty simple view templating.
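As a small sketch of what that data binding looks like in AngularJS 1.x (the module, controller, and field names are made up for illustration):

// AngularJS 1.x sketch: the controller only exposes data; the template
// (shown in the comment below) binds to it with ng-repeat, no manual DOM work.
angular.module('gridApp', [])
  .controller('GridCtrl', function ($scope) {
    $scope.rows = [
      { name: 'Widget', price: 9.99 },
      { name: 'Gadget', price: 14.5 },
    ];
  });

/* Template (with ng-app="gridApp" on a parent element such as <body>):
<table ng-controller="GridCtrl">
  <tr ng-repeat="row in rows">
    <td>{{row.name}}</td>
    <td>{{row.price}}</td>
  </tr>
</table>
*/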
Hammer.js is also a very decent gesture-tracking library.
From there, though, I'd suggest building your own framework, if you want it done the way that you want it to be done.
Nobody knows what your needs are but you, and trying to stuff things into a shoebox, because it's available, isn't always the solution, regardless of what various companies may think...
You can leave most of the node-work to Angular, you can leave the gesture-sensing to Hammer, you can pull out some other basics from jQuery-lite (the no-frills jQ installed inside of Angular, if you don't have jQ on your site), or jQ, itself...
But they're just tools and not answers.
The web can be very responsive if you cache references to elements rather than querying for them over and over, delegate events, make large structural changes off-DOM (on cloned nodes, if necessary), don't try to treat JS as a traditional inheritance-heavy language, and remain mindful of how and when to use AJAX (number and frequency of calls versus size of data; favour fewer calls).
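A short sketch of the "cache references and work off-DOM" point (the element id and data are made up):

// Cache the reference once instead of re-querying on every update.
const list = document.getElementById('results');

function updateResults(items) {
  // Build the new rows off-DOM in a fragment, then touch the live DOM once.
  const fragment = document.createDocumentFragment();
  for (const item of items) {
    const li = document.createElement('li');
    li.textContent = item;
    fragment.appendChild(li);
  }
  list.textContent = '';        // clear existing children
  list.appendChild(fragment);   // one batched insertion instead of one per row
}

updateResults(['alpha', 'beta', 'gamma']);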
As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 10 years ago.
Recently my company has decided to rebuild an enterprise portal, which will be used by people across the globe to register their products for extended warranty. They have come up with J2EE (Spring MVC) and Oracle as the technology stack for the business layer, and have decided to use JSF (JavaServer Faces) for the front end (user interface). Being a front-end engineer, I want to oppose JSF, since it gives me less control over the generated markup; JSF will also inject/generate unnecessary markup into the page, which acts as unhealthy food for the browser. It will also be difficult to achieve browser compatibility: since I don't have any control over the generated markup, it is hard to apply the correct CSS behaviour. It is also not possible to use concepts like fluid or tableless layouts.
All this will result in a poor user experience. My idea is to develop the UI with hand-coded HTML, then convert those .html files into JSPs and plug those JSPs into the Spring MVC architecture.
Having said all this, I need to present a proposal that will justify replacing JSF with HTML for the UI layer. Your inputs, thoughts, and suggestions will be valuable; please write back.
Also, I don't consider XHTML another option; it has to be HTML. Let me know what you think and what makes you think that way.
Thanks for stopping by. I do apologise if reading all this has taken a lot of your time.
What you are stating is true when you're using the vintage JSF 1.0/1.1 API with "pure" JSF components. There was no built-in component with which you could represent an HTML <div> element (so that you could accomplish the general page layout in a semantic manner). Also, embedding "plain" HTML in a JSF page was a pain because it got rendered outside of and before the JSF component tree; you had to put plain HTML in <f:verbatim> tags all over the place. The purists and the unaware were more or less forced to use <h:panelGrid> (which renders a <table>) to position the elements on the page.
Apart from that, during the early JSF years, NetBeans shipped with a built-in visual JSF editor which let you drag, drop, and bind JSF components visually without writing a single line of code. This obviously generated a lot of at-first-sight unnecessary and unmaintainable code, and the pixel-precise positioning of the elements was achieved "behind the scenes" with a <h:panelGrid>. Those kinds of JSF applications are, in terms of maintainability and web semantics, a complete disaster.
Most of the negative stories you'll hear about JSF with regard to front-end development are related to this. Most of the JSF users/observers/ranters from that time are still blindly focused on it because of the bad experience they had, and/or they think that JSF nowadays is still the same, and/or they see the visual editor as part of JSF while it's "just" another (bad) tool. Also, most of the ones who say "JSF sucks" are usually the ones who started using it with a visual drag-and-drop editor without any solid background knowledge of what's happening under the hood (especially the Servlet API!).
Since JSF 1.2 (which was released over four years ago, by the way), the <h:panelGroup> component has had an extra attribute, layout="block", which (finally) lets it render a full-worthy HTML <div> element, so you can build a more semantic layout using JSF components only. And that's not all: JSF 1.2 also comes with an improved view handler which allows embedding plain HTML inline among other JSF components without hassling with <f:verbatim> tags. You can nicely put <div> elements wherever you want without adding more verbosity.
Even though this was a major improvement, there were still two other major (although not directly UI-related) problems with the standard JSF implementation: 1) state management across requests is hard without grabbing the session scope (think of preserving the same data in tables and dropdowns, and the conditions of e.g. the rendered attribute); 2) everything goes through POST and you can't nicely invoke JSF-ish actions through GET.
Since JSF 2.0, which is already almost a year old, those problems have been addressed with a new scope, the view scope, and a new set of components for GET actions. In addition, JSP has been replaced by Facelets as the default view technology. Facelets greatly eases templating and creating composite components without having to resort to raw Java code and/or custom components/renderers. Even though it's XHTML-based, it can perfectly well render an HTML5-valid response with just <!DOCTYPE html>. The XHTML is only there because Facelets uses an XML-based tool under the hood to generate the (X)HTML output; XHTML-based templating in no way implies that it can only emit XHTML/XML.
All in all, your markup concerns are a non-issue when you're using JSF 1.2 or newer, and XHTML (Facelets) shouldn't be an issue either, since it can perfectly well render HTML5-valid markup.
JSF gives you a lot of predefined, rich controls that offer functionality you would otherwise have to implement manually. The price for this is giving up control, to a certain degree, over how the user interacts with the application and over the HTML that is generated. It can also stand in the way of integration with JS libraries.
Debugging and testing are considerably simpler with JSP, and especially with Spring.
It really depends on the feature set of your website, the skills of the implementation team (and support team), time-to-deliver constraints, etc.
I personally prefer simpler technologies that give more control to the developer (JSP with Spring MVC), just for the internal elegance of the framework, but that is never the deciding factor.
I did a stint as a UI Engineer for Barclays, a global bank. Now, I'll be the first to say that the financial industry has a long way to go when it comes to User Interface anything, and Barclays in particular is behind on their technology. That being said, they do know how to build things that effectively and reliably work, and the UI Lead is one of the most amazing minds I've ever had the opportunity to work with. Also, being a bank they are sticklers for compliance.
We were using exactly the alternative you proposed, and it worked well for us. Sites reliably handled millions of users daily with no negative outcomes. UI work was simple, and as a bonus, when the Federal Card Act came along, the company could hire basic web folks to come in and do the cut-up/HTML work, which engineers could then bolt into the system.
As for your XHTML question, ultimately we chose to go with HTML 4.01 Strict, and here's why: first, the W3C has decided not to advance the XHTML working group... in essence, XHTML is on its way to a slow death. Secondly, 4.01 Strict is the closest to the HTML5 standard and can fairly easily be adapted once HTML5 support becomes more widespread. A hard requirement for us was full compatibility with IE6, and this allowed us to achieve that goal.
In your negotiations, I would personally argue that it's vital for the final product to meet current web standards (W3C), because that makes it most likely you'll achieve a site that is compatible with the browsers out there (I say "most likely" because I'm convinced that Microsoft will find a way, somehow, to eventually break everything that I build... it's how they roll). Secondary concerns for your site might be SEO issues with non-compliant code, and hindrances to accessibility such as screen-reader support. You might also try outputting two similar (simple) sites using each technology and doing a performance analysis. In the case of one website I worked on that was served 1 million times per day, a 5 KB file-size saving translated to 5 GB of data daily.
Good luck! This is just one of many reasons I got away from big corporate jobs using Java and Oracle...
I think jQuery-based components coupled with an action-based framework are the way to go. You get complete control over the page, very few surprises, fast development, and ultimately faster page performance.
I've built apps with both JSF and MS ASPX + DevExpress components. In the end, I just want more control over what ends up on my page. jQuery is HUGELY popular, so there is no lack of JS talent in the market. Ajax is almost a no-brainer with jQuery, too.
Also, for building database-driven web apps in Java, nothing beats the speed of the Tagger Cat framework. It may be old-school MVC, but it is seriously database-focused and nice to work with.
How do I repair malformed HTML using C#? A great answer would be an HTML Agility Pack sample!
I'm scraping a site (for legitimate use). The site's HTML is OK but there are some annoying problems.
One way I could go would be through regular expressions. I used Expression Web to analyse the problems and the regular expressions needed to correct them. So one way would be to use a tool such as RegexBuddy to generate C# code for these regular expressions.
However, the recommended tool for processing malformed HTML in C# is the HTML Agility Pack (HAP). Moreover, I've analysed only a handful of pages and I'm afraid that future pages will contain patterns I've not yet solved, and I would hate to enter the "find the errors in the next few pages and correct them" maintenance business. So, if HAP already has a solid, always-working solution, this would be great. The problem is that except for a few mentions here at SO I could not find any how-to-use documentation for this tool, except for the object-by-object API help file.
So - before I spend $ and learning time on RegexBuddy (no free evaluation version), or break my teeth on HAP's API documentation - is there an easy way to do this? An HAP sample would help... :-)
Can you tell me what kind of annoying problems you are having?
You don't need to use regex to clean the HTML, though; HAP will let you access the elements of a malformed HTML document using XPath queries.
Basically, you need to learn XPath to know how to get the HTML elements you want.
It really depends on the kind of HTML you are parsing with HAP.
But there are several ways to get the elements:
for example by id or class, or you can even get the element that follows another element containing a given text such as "name:".
You can go to the W3Schools XPath tutorial for a nice introduction to XPath.
What I took from the answers here:
1) If you're scraping a website you don't control, you'll always enter a maintenance mode where you have to fix your scraper every time the layout of the page you're scraping changes.
2) If you are limited to this known site, why not write your scraper to adjust for the problems?
So, if I have to go into maintenance mode, it should be as easy as possible. Therefore, my process is as follows:
I use Webius's SWExplorerAutomation to detect scenes in web pages. The idea is that a scene is a collection of conditions you define for IE. When a web page is loaded, IE tries to see which set of conditions is met (e.g., the page title is "Account Login", and the page contains a "Login" text box and a "Password" text box). If a set of conditions corresponding to a scene is detected, IE reports that the scene has been detected. This model provides an abstraction layer: some changes in the web page translate into changes in the scene file, saving the code from having to change. Additionally, this shields me from IE's event-driven model. I'm evaluating this product, but I'm not yet sure I'll use it, mainly because the documentation is terrible. Another alternative is WatiN, and one more reason I haven't yet bought SWEA is this article accusing its author of spamming against WatiN.
Once the web page has been acquired, I use Expression Web to run compatibility checks and identify errors.
I use RegexMagic to remove and correct errors. I really love this tool. Sure, sometimes it makes you murderously angry because it doesn't let you do things that should be really easy, but it's a sweet, sweet tool, and the documentation is amazing.
Finally, after all the errors I know about have been corrected, I use HTML Agility Pack to convert to XHTML - crossing the t's and dotting the i's, so to speak: all lowercase, quotes around attributes, and so on.
Hope this helps!
Avi
Regex can't be used for HTML cleaning.
Does http://tidy.sourceforge.net/ help?
If you're scraping a website you don't control, you'll always enter a maintenance mode where you have to fix your scraper every time the layout of the page you're scraping changes. It doesn't matter whether you're using the regex <td color="red">\d+</td> to get the big red number from a page, or a DOM parser to get the 3rd cell in the 2nd row of the table with id "numbers" to get the same thing. The regex breaks if the webmaster replaces the color attribute with a class attribute. The DOM parser breaks if the webmaster adds another row to the top of the table.
If you're scraping larger parts of a web page and want to embed them in your own web page, it may be easier to get over your desire for web standards compliance and just let the browser figure out how to display things.
Since you're using Html Agility Pack and know of the problems that occur, and since you are limited to this known site, why not write your scraper to adjust for the problems once you've loaded the HtmlDocument?
i.e.:
If you know that a given element always appears right after another, insert that element into the first child position of the containing tag...