How to take into account empty groups in Report Builder - reporting-services

I have asset conditions that I am graphing for my Council using Microsoft Report Builder and I can’t figure out a way to show categories that have no data against them. So below for instance, I have two examples of graphs of different asset types. The top graph has all of the categories I require (Excellent, Good, Fair, Poor and Failed), but the bottom graph is missing any examples of Failed assets. Is there any way that I can specify what categories must be included, so that blank ones are covered?
[Example images omitted: two graphs of different asset types; the first shows all five categories (Excellent, Good, Fair, Poor, Failed), the second has no Failed assets.]

Related

How to surface different Workshop pages to different user groups?

I have a Workshop module that addresses different user groups. Hence I would like to surface different pages to different groups by default. Indeed I see an option to control the default page selection based on a variable.
My first thought was to split my users into different Multipass groups and then have a function that queries a given user's Multipass attributes for membership in certain groups. However, I don't seem to be able to check for group membership in this way, probably for security reasons.
What would be the recommended way to go about this?
The Foundry security primitives for resource visibility (as opposed to data visibility) are largely aligned at the resource level rather than within a given resource. (The one exception I know of that's relevant is within the Object View configuration, where you can set visibility on different Tabs).
An approach also depends on whether the resource visibility is a matter of permissions (i.e. a user outside a given group should not see a given page - again, separate from the permission to see any data within that page) or one of convenience (i.e. all users can see all the data and all the interfaces, but each group should simply start in a different place).
In the former case (i.e. security), I think it'd be best to make a separate Workshop app for each team and then perhaps wrap them all into a Carbon workspace. The resource visibility, configured as the actual resource permissions in Compass, should determine whether it appears in the Carbon workspace for the user.
If it's just for convenience, you could build all the pages in a single Workshop app, then make a separate Carbon workspace for each team and set a parameter to determine the default page, as you mentioned.

Get categories from Wikipedia:Vital articles

I'm trying to get a "category tree" from wikipedia for a project I'm working on. The problem is I only want more common topics and fields of study, so the larger dumps I've been able to find have way too many peripheral articles included.
I recently found the vital articles pages which seem to be a collection of exactly what I'm looking for. Unfortunately I don't really know how to extract the information from those pages or to filter the larger dumps to only include those categories and articles.
To be explicit, my question is: given a vital article level (say level 4), how can I extract the tree of categories and article names for a given list e.g. People, Arts, Physical sciences etc. into a csv or similar file that I can then import into another program. I don't need the actual content of the articles, just the name (and ideally the reference to the article to get more information at a later point).
I'm also open to suggestions about how to better accomplish this task.
Thanks!
Have you tried PetScan? It's a Wikimedia-based tool that lets you extract data from pages based on a set of conditions.
You can achieve your goal by opening the tool, navigating to the "Templates&links" tab, and typing the page name in the "Linked from All of these pages:" field, e.g. Wikipedia:Vital_articles/Level/4/History. If you want to add more than one page, enter them in the textarea one per line.
Finally, press the "Do it!" button and the data will be generated. After that you can download the data from the "Output" tab.
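If you'd rather script it than click through PetScan, the same extraction can be done directly against the MediaWiki API with `prop=links`. A minimal sketch (the page title and CSV layout are just examples, not part of PetScan's workflow):

```python
import csv
import json
import urllib.parse
import urllib.request

API = "https://en.wikipedia.org/w/api.php"

def titles_from_response(data):
    """Extract linked article titles from one prop=links API response."""
    return [link["title"]
            for page in data["query"]["pages"].values()
            for link in page.get("links", [])]

def linked_titles(page):
    """Yield every main-namespace title linked from `page`, following
    the API's continuation parameters until the list is exhausted."""
    params = {
        "action": "query",
        "titles": page,
        "prop": "links",
        "plnamespace": 0,   # article namespace only
        "pllimit": "max",
        "format": "json",
    }
    while True:
        url = API + "?" + urllib.parse.urlencode(params)
        with urllib.request.urlopen(url) as resp:
            data = json.load(resp)
        yield from titles_from_response(data)
        if "continue" not in data:
            return
        params.update(data["continue"])

def write_csv(titles, path):
    """Write one row per title with a reconstructed article URL."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        w = csv.writer(f)
        w.writerow(["title", "url"])
        for t in titles:
            w.writerow([t, "https://en.wikipedia.org/wiki/" + t.replace(" ", "_")])

# write_csv(linked_titles("Wikipedia:Vital articles/Level/4/History"), "history.csv")
```

This gives you the article names plus a link back to each article, which matches the "name and a reference for later" requirement.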

MediaWiki API: Get all pages on sublists of lists on Wikipedia?

I am writing an application that needs lists of Wikipedia page titles within a certain category. Some categories work really well for this. For example, Category:English-language_films is a category attributed to about 60k pages. Using the MediaWiki API's list=categorymembers query, I can get a list of all 60k films.
However, this works much less well with something like hockey players in the NHL. Category:Lists_of_National_Hockey_League_players is about as close as a category gets, but this is a category of list pages. It turns out that the concept of NHL players is stored in lists, not categories, whereas the concept of English-language films is stored as a category.
It's rather difficult to obtain the actual list, simply because these lists themselves are broken up into several sub-lists by alphabet or team. It's theoretically possible to screen-scrape the data, but simply getting the list of Wikipedia pages linked from that page is error-prone.
Is there a straightforward way to get pages that are listed by lists, including expanding sub-lists, using the API? Or some way to tell from the content of a list whether a link is a member of the list or just metadata about a member of the list?
When there is a category of lists of things, chances are there will be a category of the things as well. In your case that would be Category:National Hockey League players. You can walk that recursively with the categorymembers API. (Unlike lists, categories can't contain red links, so depending on your use case that might be a problem.)
Other than that, Wikipedia APIs won't be much help. You can check Wikidata for something appropriate (e.g. data items with the NHL.com player ID property); that's a different data set but sometimes it is kept in sync, and always easy to query. If that's not appropriate, you'll have to scrape the HTML.
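The recursive categorymembers walk suggested above can be sketched as follows; a minimal version, assuming the standard list=categorymembers parameters (namespace 14 marks subcategories, and a `seen` set guards against category cycles):

```python
import json
import urllib.parse
import urllib.request

API = "https://en.wikipedia.org/w/api.php"

def members_from_response(data):
    """Split one categorymembers response into (article titles, subcategories)."""
    articles, subcats = [], []
    for m in data["query"]["categorymembers"]:
        # Namespace 14 is the Category: namespace; everything else is a member page.
        (subcats if m["ns"] == 14 else articles).append(m["title"])
    return articles, subcats

def walk_category(category, seen=None):
    """Recursively collect article titles under `category`."""
    seen = set() if seen is None else seen
    if category in seen:          # categories can form cycles
        return []
    seen.add(category)
    titles = []
    params = {
        "action": "query",
        "list": "categorymembers",
        "cmtitle": category,
        "cmlimit": "max",
        "format": "json",
    }
    while True:
        url = API + "?" + urllib.parse.urlencode(params)
        with urllib.request.urlopen(url) as resp:
            data = json.load(resp)
        articles, subcats = members_from_response(data)
        titles.extend(articles)
        for sub in subcats:
            titles.extend(walk_category(sub, seen))
        if "continue" not in data:
            return titles
        params.update(data["continue"])

# walk_category("Category:National Hockey League players")
```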

Error 10008 OneNote API

I have a OneNote notebook that is shared in a OneDrive library. When trying to get the sections via the REST API, I get the 10008 error message explaining that I have more than 5000 items and the query cannot be completed. I know that this notebook has far less than 5000 sections, but the OneDrive library has more than 5000 items.
My query is as follows:
https://www.onenote.com/api/v1.0/users/{user id}/notes/notebooks/{notebook id}/sections
I would have expected this kind of error if I were expecting the query to return 5000+ items, but in this case I'm expecting somewhere in the neighborhood of 10-20 sections.
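For reference, the failing call can be reproduced with a short script. This is a sketch, not the product group's guidance: the bearer-token header and the `{"error": {"code": ...}}` payload shape are assumptions based on common OneNote REST conventions.

```python
import json
import urllib.error
import urllib.request

def parse_error_code(body):
    """Pull the error code (e.g. "10008") out of an error payload,
    assuming the {"error": {"code": ..., "message": ...}} shape."""
    return body.get("error", {}).get("code")

def get_sections(user_id, notebook_id, token):
    """Request the notebook's sections; return (sections, error_code)."""
    url = (f"https://www.onenote.com/api/v1.0/users/{user_id}"
           f"/notes/notebooks/{notebook_id}/sections")
    req = urllib.request.Request(
        url, headers={"Authorization": "Bearer " + token})
    try:
        with urllib.request.urlopen(req) as resp:
            return json.load(resp).get("value", []), None
    except urllib.error.HTTPError as e:
        body = json.loads(e.read().decode("utf-8"))
        return [], parse_error_code(body)

# sections, err = get_sections(user_id, notebook_id, token)
# if err == "10008": the containing library has hit the item threshold
```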
I have two questions I'd like answered by the OneNote product group:
Is there a way around this without moving the notebook?
Can I get an explanation as to why this is necessary?
Is there a way around this without moving the notebook?
Splitting the notebooks across multiple lists should solve this problem. You want to make sure that no single list contains more than 5000 notebooks or sections.
Can I get an explanation as to why this is necessary?
Although a given notebook contains only 10-20 sections, the SharePoint indexing mechanism considers all the sections in the list when filtering sections for a given notebook, so the API fails with this error message whenever your list contains more than 5000 notebooks or sections.

MediaWiki extension to support taxonomy by genus and species

I'm trying to build a MediaWiki-based website for a very specific purpose. Namely, I would like to create a field guide for a specific group of animals (reptiles and amphibians). Since the people I would want to generate content on the website aren't necessarily techies, I'd like to make things as easy and painless as possible for contributors.
Now, in most groups of animals, taxonomic designations are fluid, and change all the time. As an example, consider the following:
A species used to be called Genus1 species1. It was then called Genus2 species1. As of now, this species has been split into several species, say Genus2 species1, Genus2 species2, Genus2 species3, etc. In the worst case, anything about the nomenclature and classification of the species could change, including, but not limited to, the species being moved, split or merged with any other species.
For users, these changes should be transparent. That is, on typing in http://url_of_wiki/wiki/Genus1_species1, they should automatically be redirected to the lowest taxonomic group (in this case Genus2) that is non-ambiguous. Essentially, if a page is redesignated (moved, split or merged), I would like to automatically create all new pages and redirects required.
I should be able to implement this as an extension quite easily. However, I've read the MediaWiki documentation on extensions, but haven't been able to figure out just what part of MediaWiki it would be best to target.
So, the question is, is this type of extension best implemented as a parser extension, by adding new tags, or a user-interface extension, or a combination of the two (a user-interface extension backed by a parser extension)?
Nice challenging problem! If it were up to me I would solve it in a different way:
- use page level for genera and
- sub-page level for species.
This will automatically take care of renaming since redirects will be made.
Alternatively:
- use page level for species and
- categories for genera.
Then use an if pagename template (see Wikipedia example) to change the category based on the page name.
Or possibly combine these methods.
(See also Wikis and Wikipedia)
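To illustrate the redirect bookkeeping either approach implies, here is a small sketch. The naming rule (redirect a split species to its shared genus, i.e. the lowest unambiguous group) follows the question's example; the page names are hypothetical, and actual page creation would go through MediaWiki's action=edit API or extension hooks rather than this function.

```python
def redirect_wikitext(target):
    """Wikitext body for a redirect page pointing at `target`."""
    return f"#REDIRECT [[{target}]]"

def redirects_for_split(old_name, new_names):
    """Map an old species title to the redirect it should carry after a
    redesignation. A simple rename points at the new species; a split
    into several species points at the shared genus, the lowest
    non-ambiguous taxonomic group."""
    if len(new_names) == 1:
        target = new_names[0]                  # plain rename
    else:
        target = new_names[0].split()[0]       # shared genus, e.g. "Genus2"
    return {old_name: redirect_wikitext(target)}

# redirects_for_split("Genus1 species1",
#                     ["Genus2 species1", "Genus2 species2", "Genus2 species3"])
```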