In Access VBA, what are the major differences and issues between GroupLevel objects and Section objects? I thought I understood, but then I got to a point in my code where I realized I didn't. I'm writing code that automates the formatting of controls on a report that's open in design view. While it's not normally necessary to know about Sections for this, the code does operate based on which section a control is in.
From looking at the help file, online resources, and the Locals window of the IDE in debug mode, it seems to me that GroupLevel is the greater of the two... almost. A GroupLevel object represents a report group level, if there is any grouping. A Section object for a header, footer, both, or neither then stems from a GroupLevel object, depending on its properties.
It's tempting to say that you can't have a Section without a GroupLevel, but there will always be a Detail section, even with no grouping. Moving up from there are the report header/footer and page header/footer, and whether or not these exist determines whether or not a Section object exists for them. So I suppose it would be more accurate to say you can't have more than five Section objects unless there's a GroupLevel behind each additional one.
And this is just what I've found for Reports. I haven't even dusted the snow off the tip of the iceberg for forms yet. Any insights & explanations in that vein would be most appreciated.
There's a closely related issue: since GroupLevel objects and Section objects don't have their own respective collections (there is no "GroupLevels" or "Sections"), are there .Count properties for them hidden elsewhere? Or does a programmer simply have to follow the conceptual logic and iterate through .GroupLevel(n) or .Section(n) until a runtime error indicates that "n" doesn't exist?
Sections are the "physical" parts of the report/form: headers, footers, detail.
GroupLevels are logical, determined by your data and how you define Groups in the report.
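On your second question: as far as I know, neither object exposes a Count anywhere, so probing with error trapping really is the way. One wrinkle: Section indexes are fixed slots (0 = detail, 1/2 = report header/footer, 3/4 = page header/footer, 5 and up = group-level header/footer pairs), so a missing report header doesn't shift the others down - you have to probe each slot rather than stop at the first error. A minimal sketch; the upper bound of 24 assumes the documented maximum of 10 group levels:

Private Function SectionExists(rpt As Report, idx As Long) As Boolean
    ' A non-existent slot raises a runtime error, so trap it.
    Dim s As String
    On Error Resume Next
    s = rpt.Section(idx).Name
    SectionExists = (Err.Number = 0)
    Err.Clear
End Function

Public Function CountSections(rpt As Report) As Long
    Dim i As Long, n As Long
    For i = 0 To 24            ' detail, report/page pairs, group pairs
        If SectionExists(rpt, i) Then n = n + 1
    Next i
    CountSections = n
End Function

GroupLevel indexes, by contrast, are contiguous from 0, so for those you can simply probe something like .GroupLevel(n).ControlSource in a loop and treat the index of the first error as the count.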
So I'm building a website that contains information about a bunch of different animal species. I will have a list of 500 items that should be filterable and sortable by different criteria. For example, I will have a 'country selection' option. If Brazil is selected, the Capuchin monkey, among other animals living in Brazil, should be added to the list.
I could see myself making a list of 50 species with no problem, as the HTML would be manageable. But would having 500 items in a filterable list even be possible without using some sort of database?
I was thinking of just pairing animal items from the list with certain filter criteria. For example, Capuchin monkey with "Brazil", "Mammal", "Omnivore", etc.
And when e.g. "Mammal" is selected in the filter, all animals paired with that property (all the mammals in the list) are added to the list, and those not paired with it are removed.
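In rough JavaScript terms (I'm guessing at the data shape here), I imagine something like:

// Each animal carries the criteria it can be filtered by.
var animals = [
  { name: "Capuchin monkey", tags: ["Brazil", "Mammal", "Omnivore"] },
  // ...the other species
];

// Selecting e.g. "Mammal" keeps only the animals paired with that property.
function filterBy(tag) {
  return animals.filter(function (a) {
    return a.tags.indexOf(tag) !== -1;
  });
}

console.log(filterBy("Mammal")); // all mammals in the list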
As you probably can tell, I'm really uneducated on how to go about creating this filterable list. Down the road I might even look into adding a search function.
After plugging in all the content, I would never need to change anything. I've read that databases should only be used if you have dynamic content.
I wouldn't list all 500 items on the same page, as that would make it very slow. I would have 10 items per page.
I don't need a solution per se. I just wish to be pushed in the right direction.
Should I look into MySQL? Is a filterable list of 500 items possible with just HTML/CSS/JavaScript? I am somewhat familiar with JavaScript, and have read that JSON might be able to provide what I need.
Sorry if my question is vague or if I've gone wrong anywhere (this is my first post). Please ask for any clarification; any advice or suggestions are greatly appreciated.
Thanks,
Manne
No, you don't need a database. Have a look at this very robust jQuery plugin that will easily allow you to sort/filter/search 500 items in JavaScript alone:
https://datatables.net/
There are examples that are powered by JSON alone, so I would suggest you simply store your data in a JSON file until you grow large enough that you need to change that (if you ever do).
Here is an example where the data is pulled from a .txt file:
https://datatables.net/examples/data_sources/ajax.html
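A minimal initialisation might look like this (the table id, column names, and file name are placeholders for your data; DataTables reads the "data" key of the JSON response by default):

$(document).ready(function () {
  $('#species').DataTable({
    ajax: 'species.json',
    columns: [
      { data: 'name' },
      { data: 'country' },
      { data: 'class' },
      { data: 'diet' }
    ],
    pageLength: 10   // ten items per page, as you planned
  });
});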
I've been researching this and haven't found much in the way of standard solutions for creating a 508-compliant, accessible org chart. We have images that represent organizational structure. The options seem to be: create an external file to link to that attempts to represent the relationships in the chart (although I'm not sure there's a commonly accepted way to represent a hierarchical tree as text), or maybe create an imagemap that doesn't actually link to anything external but just exists for the labels, which seems much more of a hack. Another potential representation just occurred to me: a linked HTML file that is basically just a standard nested list, which can represent unlimited hierarchical complexity. Some labeled items sit outside the general hierarchy (so groupings of various types within the hierarchy, etc.). Has anyone else run into this, or seen examples of how others have approached it?
Section 508 says, regarding web-based intranet and internet information and applications, which is probably what matters here: “(a) A text equivalent for every non-text element shall be provided (e.g., via "alt", "longdesc", or in element content).” Any solution that fulfills the requirement is 508 compliant. Note that this is a legal and formal matter; it does not imply that the content is really accessible.
So you can, for example, write a textual description of the organization (equivalent in content to the image) into the alt attribute; there is no defined upper limit on its length. Alternatively, you can use the longdesc attribute to link to a page containing an equivalent description, which may use all the expressive power of HTML, e.g. nested lists, or a table (which has accessibility requirements of its own, of course). Software support for longdesc is limited, if not anecdotal, but Section 508 explicitly mentions this possibility. Most sensibly, you can write a textual description, using HTML markup as needed, either in the page content (in which case you can use alt="") or on a separate page that you link to.
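For instance, the in-content variant could look like this for a small chart (the names are placeholders; the nesting mirrors the reporting lines):

<img src="orgchart.png" alt="">
<!-- alt is empty because the equivalent description follows in content -->
<ul>
  <li>Director
    <ul>
      <li>Head of Operations
        <ul>
          <li>Operations Analyst</li>
        </ul>
      </li>
      <li>Head of Finance</li>
    </ul>
  </li>
</ul>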
For a more specific answer, I think you need to ask a more specific question – like one with a real image representing an org chart.
I'm working toward a deadline that led me to this question more than five years after it was asked. Even now, if somebody hands me a visually presented org chart with no accessible fallback, Jukka's answer offers the best solution I can think of.
But what if we are part of the creation process (which is always the ideal), able to influence accessibility from the start? With well-structured semantic HTML, is it possible that no fallback will be needed? That's what I've gone in search of now, and here are a couple of resources that may be useful to someone in similar need. Both are open source under the MIT License, which simply requires keeping the original copyright and license notice in the source code.
Here's a CSS solution proposed by Erin Sullivan.
And here's another that uses the Treeflex CSS library.
I always try to keep content separate from presentation, and CSS offers the possibility for continually customizing, refining and improving the presentation. I expect to use one of these in my current project, and I hope this research benefits others who are committed to better accessibility.
I've been lurking around for years reading up on all sorts of topics. Professionally I'm a Systems Interface Specialist/Interface Architect. I can work wonders with tcl, Cloverleaf, HL7, even Excel, but Microsoft Access eludes me. It repeatedly befuddles and confounds me. Everything that would seem simple and logical to me is neither of those things where MS Access is concerned.
So, I've come to you. Honestly, I'm not even sure I'll be able to put into the correct "technical" words what I want to do. I only know how I want things to appear to function when I'm finished.
I have built a very "simple" relational database to be used by authors who collect sentences or sentence snippets for use/inspiration in writing. There are three tables:
tblPhrases contains an autonumber field idxPhrases and a memo field Phrase.
tblTags contains an autonumber field idxTags and a memo field Tag.
tblTagsToPhrases contains an autonumber field idxTagsToPhrases and two number fields: Tags_index and Phrases_index.
The first two tables require that all fields be unique.
Clearly (or not so?), the third table is the many-to-many connection. It allows for there to be many tags associated with a phrase and more than one phrase to be associated with any one tag.
I have figured out how to set up a form and subform, but it looks clunky, as you can see, and it's not at all what I had imagined.
What I have now: the dropdown combo box in the keyword/tag subform does work - it populates from tblTags, correctly writes to tblTagsToPhrases when I select something, and displays the list of associated tags when I return to that Phrase in the form.
What I would really like instead is to have the values in tblTags appear in a "Tag Cloud" like on a webpage. Then I could click on the hypertext control for a value and it would populate a text field, adding commas between the selections, while behind the scenes it's really just adding to tblTagsToPhrases. I should also be able to type in a new Tag right there. Basically, the field would be treated like a Tag field on a bookmarking site.
And if I were really honest, I'd like to be able to display the phrases as hypertext too because that would look much less clunky and less like a database.
Can anyone give me some direction on how to get from where I am to there?
Thanks so much in advance for any and all help!
You need VBA to do this, using the form's Timer event to move the labels around randomly, with some detailed maths to make them drift in a direction of flow rather than move completely at random. Each label (or textbox) would first have to be populated with the field values. If you only want a static tag cloud then you wouldn't need the Timer event.
If you don't know VBA then you'll have to search, but I think it unlikely that you will find an example of this (and if you do, will you be able to adapt it?). In fact, if you get this to work, animation included, I would expect it to exhibit noticeable flicker.
I think you should ask yourself if you intend to build a database or a web-application. (If you are struggling with Access, why go even further and attempt to make it do things that it wasn't designed for?)
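That said, the click-to-tag behaviour itself (without any animation) is not much code. A rough sketch using the table and field names from your question - here lblTag is a hypothetical tag label whose Tag property I assume holds the tblTags key, and txtTags is the display textbox:

Private Sub lblTag_Click()
    ' Record the association in the junction table.
    CurrentDb.Execute _
        "INSERT INTO tblTagsToPhrases (Tags_index, Phrases_index) " & _
        "VALUES (" & Me.lblTag.Tag & ", " & Me.idxPhrases & ")", dbFailOnError
    ' Append the caption to the comma-separated display field.
    If Nz(Me.txtTags, "") = "" Then
        Me.txtTags = Me.lblTag.Caption
    Else
        Me.txtTags = Me.txtTags & ", " & Me.lblTag.Caption
    End If
End Sub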
I have a query. In our application we have lots of HTML tags. During development, many tags were not given an id because there was no requirement. Now the QA team wants to automate the test cases using QTP. In most cases this tool doesn't recognize elements because it can't find ids for most of the HTML tags. Now we are asked to add ids to all the HTML tags.
I want to know whether adding id attributes to these tags will have any effect. Even positive impacts are welcome.
I do not think there will be any effect, positive or negative: maybe the size of the HTML page will increase a bit, but probably not that much.
Still, are you sure you need to put "id" attributes on every HTML tag of your pages? Wouldn't a few of them be enough? Like on form fields, links, and error messages; that's probably about it.
One thing you must take care of, though, is that "id"s, as in "identifiers", must be unique, which means it might be good, before you start adding them, to define some kind of id policy, saying, for instance, that "ids for elements of this kind should be named this way".
And, for your next projects: have developers add these while they're developing ;-) (following the policy, of course).
Now that I think about it: a positive effect might be that it will be easier to write JavaScript code interacting with your HTML document -- but that will mostly be true for your next projects, or later evolutions of this one, where the ids are already present in the HTML by the time developers put the JS code in place.
Since there are no QTP-related answers yet:
GUI recognition in QTP is object-oriented. To identify an object, QTP needs a unique combination of the object's properties, and checking them should be as fast as possible - that is why an HTML ID would be ideal.
Now, where it is especially critical: objects that have no other unique identifiers. The most typical example is HTML tables. Their contents are dynamic, and their number on the page may vary. By adding an HTML ID you let the recognition mechanism get straight to the right table.
Objects with other unique properties can be recognized well without an HTML ID. For example, if you have a single "submit" link on the page, QTP will successfully recognize it by its inner text.
So the context-specific answer: don't start adding ids to every single tag. Ask the automation guys to prepare a list of objects they have problems with, and add ids to those objects.
PS. It also depends on the automation engineers' programming skills. There are descriptive programming and dynamic recognition methods that allow retrieving the right objects even without ids.
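For example, descriptive programming builds the object description at runtime instead of relying on the object repository. A sketch - the Browser/Page names and the id are placeholders:

' Identify a table by its properties instead of a repository entry.
Set tblDesc = Description.Create()
tblDesc("micclass").Value = "WebTable"
tblDesc("html id").Value = "ordersTable"   ' one of the ids QA asked for
' The id makes the match unambiguous even among many tables.
MsgBox Browser("MyApp").Page("Orders").WebTable(tblDesc).RowCount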
As Albert said, QTP doesn't rely solely on an element's id. In fact, because many web applications generate different ids for each session, the id property (as far as I remember) isn't part of the default description for most web test objects.
QTP is pretty good at recognizing most simple web controls, and if you're facing problems it may be that a Web Extensibility project will help you bridge the gap between the semantics of your web application and the raw HTML it is built from. If a complex control is recognized by QTP as a generic WebElement (which is actually the div that contains the span that drives the code), you will understandably have object-recognition problems, since there are many divs on the page but probably far fewer complex controls.
If you are asking about side effects: no, adding ids won't cause any problems (apart from taking up some extra bytes, of course).
If you really have the need to add ids, go ahead and add them.
http://www.w3.org/TR/html4/struct/links.html#anchors-with-id says: "The id and name attributes share the same name space. This means that they cannot both define an anchor with the same name in the same document. It is permissible to use both attributes to specify an element's unique identifier for the following elements: A, APPLET, FORM, FRAME, IFRAME, IMG, and MAP. When both attributes are used on a single element, their values must be identical."
What work, if any, has been done to automatically determine the most important data within an HTML document? As an example, think of your standard news/blog/magazine-style website, containing navigation (possibly with submenus), ads, comments, and the prize - our article/blog/news body.
How would you determine what information on a news/blog/magazine is the primary data in an automated fashion?
Note: ideally, the method would work with well-formed markup and with terrible markup - whether somebody uses paragraph tags to make paragraphs, or just a series of breaks.
Readability does a decent job of exactly this.
It's open source and posted on Google Code.
UPDATE: I see (via HN) that someone has used Readability to mangle RSS feeds into a more useful format, automagically.
"think of your standard news/blog/magazine-style website, containing navigation (possibly with submenus), ads, comments, and the prize - our article/blog/news body. How would you determine what information on a news/blog/magazine is the primary data in an automated fashion?"
I would probably try something like this (there's a rough code sketch at the end of this answer):
open the URL
read in all links to the same website from that page
follow all the links and build a DOM tree for each URL (HTML file); this should help you find redundant content (shared templates and such)
compare the DOM trees of all documents on the same site (tree walking)
strip all redundant nodes (i.e. repeated navigational markup, ads, and such things)
try to identify similar nodes and strip them where possible
find the largest unique text blocks that are not found in the other DOMs on that website (i.e. unique content)
add these as candidates for further processing
This approach seems pretty promising because it would be fairly simple to implement, yet still has good potential to be adaptive, even to complex Web 2.0 pages that make excessive use of templates, because it would identify similar HTML nodes shared between all pages on the same website.
This could probably be further improved simply by using a scoring system to keep track of DOM nodes that were previously identified to contain unique content, so that these nodes get prioritized for other pages.
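Here's a rough sketch of the core comparison step in Node.js, using the cheerio parser; the paragraph-level selector and the "largest unique block" pick are simplifications of the steps above:

const cheerio = require('cheerio');

// Collect the trimmed text of paragraph-level blocks from one page;
// staying at paragraph level avoids counting a wrapper's nested text.
function textBlocks(html) {
  const $ = cheerio.load(html);
  return $('p, li, td').map((i, el) => $(el).text().trim()).get()
    .filter(t => t.length > 0);
}

// Blocks repeated across pages are template/navigation; blocks unique
// to a single page are content candidates.
async function uniqueContent(urls) {
  const pages = await Promise.all(urls.map(u => fetch(u).then(r => r.text())));
  const sets = pages.map(html => new Set(textBlocks(html)));
  return sets.map((set, i) => [...set]
    .filter(t => sets.every((other, j) => j === i || !other.has(t)))
    .sort((a, b) => b.length - a.length)[0]); // largest unique block per page
}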
Sometimes there's a CSS media section defined as 'print'. Its intended use is for 'Click here to print this page' links. Usually people use it to strip a lot of the fluff and leave only the meat of the information.
http://www.w3.org/TR/CSS2/media.html
I would try to read this style, and then scrape whatever is left visible.
You can use support vector machines to do text classification. One idea is to break pages into different sections (say, treat each structural element like a div as a document), gather some properties of it, and convert it to a vector. (As other people suggested, this could be the number of words, number of links, number of images; the more the better.)
First, start with a large set of documents (100-1000) for which you have already chosen the main part. Then use this set to train your SVM.
Then, for each new document, you just need to convert it to a vector and pass it to the SVM.
This vector model is actually quite useful in text classification, and you do not necessarily need an SVM; you can use a simpler Bayesian model as well.
And if you are interested, you can find more details in Introduction to Information Retrieval (freely available online).
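To make the vectorisation step concrete, here is a browser-JavaScript sketch; the particular features are illustrative choices, and the labelled vectors would then go to whatever SVM or Bayes library you use:

// Build one vector per <div>.
function featureVector(el) {
  var text = el.textContent;
  return [
    text.split(/\s+/).filter(Boolean).length,  // word count
    el.querySelectorAll('a').length,           // link count
    el.querySelectorAll('img').length,         // image count
    (text.match(/[.!?]/g) || []).length        // rough sentence count
  ];
}

var vectors = Array.prototype.map.call(
  document.querySelectorAll('div'), featureVector);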
I think the most straightforward way would be to look for the largest block of text without markup. Then, once it's found, figure out the bounds of it and extract it. You'd probably want to exclude certain tags from "not markup" like links and images, depending on what you're targeting. If this will have an interface, maybe include a checkbox list of tags to exclude from the search.
You might also look for the lowest level in the DOM tree and figure out which of those elements is the largest, but that wouldn't work well on poorly written pages, as the DOM tree is often broken on such pages. If you end up using this, I'd come up with some way to check whether the browser has entered quirks mode before trying it.
You might also try using several of these checks, then coming up with a metric for deciding which is best. For example, still try my second option above, but give its result a lower "rating" if the browser would normally enter quirks mode. Going this route would obviously impact performance.
I think a very effective algorithm for this might be, "Which DIV has the most text in it that contains few links?"
Seldom do ads have more than two or three sentences of text. Look at the right side of this page, for example.
The content area is almost always the area with the greatest width on the page.
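As a sketch of that "most text, few links" test (the penalty weight is a guess, not a standard):

// Score each div by the text length of its direct <p> children, minus a
// penalty per link; scoring direct children keeps outer wrappers from winning.
function mainCandidate() {
  var best = null, bestScore = 0;
  document.querySelectorAll('div').forEach(function (div) {
    var pText = 0;
    for (var i = 0; i < div.children.length; i++) {
      if (div.children[i].tagName === 'P') {
        pText += div.children[i].textContent.length;
      }
    }
    var score = pText - div.querySelectorAll('a').length * 25;
    if (score > bestScore) { bestScore = score; best = div; }
  });
  return best;
}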
I would probably start with Title and anything else in a Head tag, then filter down through heading tags in order (i.e. h1, h2, h3, etc.)... beyond that, I guess I would go in order, from top to bottom. Depending on how it's styled, it may be a safe bet to assume the page title has an ID or a unique class.
I would look for sentences with punctuation. Menus, headers, footers, etc. usually contain separate words, but not sentences containing commas and ending in a period or equivalent punctuation.
You could look for the first and last element containing sentences with punctuation and take everything in between. Headers are a special case, since they usually don't have punctuation either, but you can typically recognize them as Hn elements immediately before sentences.
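A sketch of the punctuation test (the regular expression is a rough approximation of "looks like a sentence"):

// Keep elements whose own text ends clauses with sentence punctuation;
// menus and footers rarely match.
var sentenceLike = /[.!?]["')\]]?(\s|$)/;
var candidates = Array.prototype.filter.call(
  document.querySelectorAll('p, div'),
  function (el) { return sentenceLike.test(el.textContent.trim()); }
);
// H1-H6 elements immediately preceding a matching element can be kept
// too, even though they rarely contain punctuation themselves.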
While this is obviously not the answer, I would assume that the important content is located near the center of the styled page and usually consists of several blocks interrupted by headlines and such. The structure itself may be a give-away in the markup, too.
A diff between articles / posts / threads would be a good filter to find out what content distinguishes a particular page (obviously this would have to be augmented to filter out random crap like ads, "quote of the day"s or banners). The structure of the content may be very similar for multiple pages, so don't rely on structural differences too much.
Instapaper does a good job with this. You might want to check Marco Arment's blog for hints about how he did it.
Today most news/blog websites are built on a blogging platform.
So I would create a set of rules with which to search for content.
For example, two of the most popular blogging platforms are WordPress and Google's Blogspot.
WordPress posts are marked by:
<div class="entry">
...
</div>
Blogspot posts are marked by:
<div class="post-body">
...
</div>
If the search by CSS classes fails, you could fall back to the other solutions: identifying the biggest chunk of text, and so on.
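In code, the rule set can be a simple ordered list of selectors (extend it with other platforms as you meet them):

// Try each known platform marker in order; first hit wins.
var selectors = ['div.entry', 'div.post-body'];
var post = null;
for (var i = 0; i < selectors.length && !post; i++) {
  post = document.querySelector(selectors[i]);
}
// "post" now holds the content node, or null if no rule matched.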
As Readability is not available anymore:
If you're only interested in the outcome, you can use Readability's successor, Mercury, a web service.
If you're interested in code showing how this can be done, and you prefer JavaScript, there is Mozilla's Readability.js, which is used for Firefox's Reader View.
If you prefer Java, you can take a look at Crux, which also does a pretty good job.
Or if Kotlin is more your language, you can take a look at Readability4J, a port of the above Readability.js.