Tool support for MOF (Meta Object Facility) metamodels

I was looking into OMG's Business Process Definition MetaModel (BPDM) and found the metamodel definition as an XMI/CMOF (Complete MOF) file (download here).
Now I am looking for tools that support reading, editing, and displaying the MOF file, but I could not find any. I only found out that Eclipse's Ecore is somewhat based on Essential MOF (EMOF), but I could not exploit this relationship.
Do you know of any tools, libraries, or scripts with MOF support that could handle the mentioned file?

Marius already provided a good solution.
However, just in case you find it useful, there is a tool by EmPowerTec that can serve the purpose of exploring the UML metamodel.
http://www.empowertec.de/products/uml-metamodel-viewer

At least by using Google I found several tools for accessing MOF-based files:
Programmer friendly:
http://www2.informatik.hu-berlin.de/sam/meta-tools/aMOF2.0forJava/tool.html
Modeling friendly:
http://www.magicdraw.com/newandnoteworthy/magicdraw/16.0#import_export
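If you would rather script against the metamodel than use a GUI, the Ecore/EMOF relationship mentioned in the question can be exploited from Python with the pyecore library. A minimal sketch, assuming you have first converted the CMOF file to an .ecore file (Eclipse EMF can perform such imports; pyecore does not understand CMOF directly, and 'bpdm.ecore' is a made-up name):
# Browse a metamodel with pyecore (pip install pyecore).
# Assumes the CMOF/XMI file was converted to Ecore first;
# 'bpdm.ecore' is a hypothetical file name.
from pyecore.resources import ResourceSet, URI

rset = ResourceSet()
resource = rset.get_resource(URI('bpdm.ecore'))
root_package = resource.contents[0]

# Print every classifier (class, data type, ...) in the root package.
for classifier in root_package.eClassifiers:
    print(classifier.name)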


Do current CKAN instances in the field support the JSON-LD DCAT format?

I'm working on a graduate course project to develop a query client for CKAN and DCAT catalogs. I've read a lot of documentation and specs, yet a lot of things still seem to be proposals, so I figured I needed to reach out and ask someone who knows.
The Project Open Data site describes DCAT as a JSON-LD based format with a particular schema. The schema makes sense, but there is a lot of push in my class toward targeting US federal government data from data.gov, which runs CKAN (as many of these data-sharing systems do, according to my research). Everywhere I look, people suggest that CKAN supports DCAT, but I'm just not finding that.
For instance, http://catalog.data.gov/api/3/action/package_show?id=national-stock-number-extract shows a completely different JSON format. It appears to have values that could be used to translate to a JSON-LD DCAT object.
The following properties are in the DCAT schema, but most of the document doesn't conform; it just looks like something that could be translated to JSON-LD DCAT.
{
    "key": "bureauCode",
    "value": [
        "007:15"
    ]
},
{
    "key": "accrualPeriodicity",
    "value": "R/PT1S"
},
{
    "key": "spatial",
    "value": "National and International"
}
Then I came across this page, which shows the expected format I'm looking for, but it says that it's a proposal. Is this still accurate? In the case of data.gov, I can simply append .rdf to the end of a dataset URI (one of the features the proposal mentions) and it produces an RDF XML document using the DCAT vocabulary. But the same dataset accessed via the CKAN API doesn't provide the same functionality.
For instance:
http://catalog.data.gov/dataset/housing-affordability-data-system-hads -> page
http://catalog.data.gov/dataset/housing-affordability-data-system-hads.rdf -> rdf xml
http://catalog.data.gov/api/3/action/package_show?id=housing-affordability-data-system-hads -> CKAN's JSON format
http://catalog.data.gov/api/3/action/package_show?id=housing-affordability-data-system-hads.rdf -> NOT FOUND
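A quick sketch with Python's requests library reproduces this (these are just my own probes of the URLs above, not documented behaviour):
# Probe the RDF rendering and the CKAN API for the same dataset.
import requests

base = 'http://catalog.data.gov'
name = 'housing-affordability-data-system-hads'

urls = [
    base + '/dataset/' + name + '.rdf',                       # RDF XML
    base + '/api/3/action/package_show?id=' + name,           # CKAN JSON
    base + '/api/3/action/package_show?id=' + name + '.rdf',  # expect 404
]
for url in urls:
    response = requests.get(url)
    print(response.status_code, response.headers.get('Content-Type'), url)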
So what is the deal exactly? I see that the plugin for DCAT is in development, but has it just not been finished and integrated into CKAN for production?
Support for DCAT is not part of CKAN core; there is, however, the ckanext-dcat extension. It is currently still a work in progress, so it's not yet finished.
If you have specific needs that are not yet implemented, you might want to fork the repo and add those features.
I know that the Swedish portal Öppnadata.se uses ckanext-sweden, which customizes ckanext-dcat to some extent.
The specification that you found does seem outdated, but I couldn't find anything better myself. I guess it's also the basis for the ckanext-dcat extension.
All that said, this is not first-hand information. I will soon start developing a DCAT-based catalogue, and I actually tried to answer the questions you posed some time ago. My answer above reflects what I have found out so far :)
I think you're mixing up a few things. DCAT is an RDF vocabulary defined by the W3C, which means it is a standardised way to describe open data using RDF. RDF is a data model that has several serialization formats: rdf+xml, turtle, n3, json-ld, ... This means the same information can be represented in either JSON or XML.
Like Odi mentioned, CKAN does not support DCAT out of the box, it needs to be installed as a plugin.
Coming to your question now: the API link you mentioned is just that, an API for CKAN. It has nothing to do with DCAT. The information exposed by the API is similar to DCAT because both describe the datasets. The easiest way to find out what a CKAN instance makes available is to look for a link in the HTML source of a dataset page.
An example taken from the online demo, which links to the Turtle DCAT feed: <link rel="alternate" type="text/ttl" href="http://demo.ckan.org/dataset/a83cf982-723f-4859-8c1c-0518f9fd1600.ttl"/>
JSON isn't a popular format for exposing DCAT, but you should be able to find RDF libraries that can read the other formats.
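For example, here is a minimal sketch using Python's rdflib to read such a feed (the URL is the demo feed from the example above and may no longer resolve):
# Load a Turtle DCAT feed and list the datasets it describes.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

DCAT = Namespace('http://www.w3.org/ns/dcat#')

g = Graph()
# URL taken from the <link rel="alternate"> example above; may be stale.
g.parse('http://demo.ckan.org/dataset/a83cf982-723f-4859-8c1c-0518f9fd1600.ttl',
        format='turtle')

for dataset in g.subjects(RDF.type, DCAT.Dataset):
    print(dataset)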

What is relationship between GDAL, FDO and OGR?

Their documentation is simple and professional.
But it doesn't say much about the relationship between these open-source projects.
When should I use which one? And which one is suitable for what scenario?
If you are a GIS developer who is familiar with these projects, can you explain?
The fundamental common denominator of the three software packages is that they are all data-access abstractions. In particular, they provide access to geospatial data. In general, they all follow a similar convention:
- define a collection of types and objects
- define low-level access to data sources, implemented as a set of drivers (as they are named in GDAL/OGR) or providers (as named in FDO)
FDO, GDAL, and OGR are all implemented in the C++ programming language.
Along with the similarities, there are many differences. GDAL/OGR gives access to data stored in an enormous number of geospatial formats, plus lots of data-processing algorithms and operators. FDO provides those features too (interestingly, thanks to integration with GDAL/OGR in some places, like the FDO Provider for GDAL), but it feels more like a framework, whereas GDAL/OGR feels more like a library.
Anyhow, it is not possible to give you a definitive answer as to which one fits where.
You may find Matthew Perry's blog post and the discussion that follows it helpful: FDO, GDAL/OGR and FME?
Note that GDAL and OGR are bundled together under the umbrella of a common software project called simply GDAL. Both names seem to be acronyms and are explained in the GDAL FAQ; check the following questions:
What is GDAL?
What does GDAL stand for?
What is this OGR stuff?
What does OGR stand for?
In basic terms, GDAL is used for reading, writing, and transforming raster data, while OGR can do the same with vector data. I am not as familiar with FDO, but it appears to be an API used to access (including from database sources), manipulate, and analyze all kinds of geospatial data, and it relies on GDAL and OGR for some of those purposes.
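To make the raster/vector split concrete, here is a small sketch using the GDAL/OGR Python bindings (the file names are placeholders):
# GDAL reads rasters; OGR reads vectors.
# Requires the GDAL Python bindings (the osgeo package).
from osgeo import gdal, ogr

# Raster side: open a GeoTIFF and inspect its dimensions and band count.
raster = gdal.Open('elevation.tif')
print(raster.RasterXSize, raster.RasterYSize, raster.RasterCount)

# Vector side: open a shapefile and count the features in its first layer.
vector = ogr.Open('parcels.shp')
layer = vector.GetLayer(0)
print(layer.GetFeatureCount())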

Mercurial API and Extensions resources

I want to write extensions for Mercurial. What are good resources, such as tutorials, guides, API references, or maybe even an existing extension that is well commented and easy to learn from?
So far, I have only found the short MercurialApi and WritingExtensions wiki pages.
Mercurial: The Definitive Guide, also known as the hg book, contains a section on writing extensions for Mercurial. The book is available to view for free at http://hgbook.red-bean.com/.
Edit: My apologies, the hg book only describes using extensions, not writing them. The book's section on writing hooks may still be useful, though.
The best way to learn how to write an extension is probably going to be reading extension code. Focus the most attention on extensions that perform functions similar to what you want to implement.
E.g., if you're interested in converting from one SCM system to another, take a look at the hg-git extension.
As far as I know, there isn't a lot in the way of 'learning materials' for writing extensions. Your best bet is probably to find an extension that does something similar to the one you want to write, read the source and figure out how it works. You can try contacting that extension's author if you get stuck.
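To give a feel for the shape of an extension, here is a minimal sketch (the command is invented, and it targets the modern registrar API; older Mercurial versions used cmdutil.command instead):
# hello.py - a minimal Mercurial extension sketch.
# Enable it in your hgrc:
#   [extensions]
#   hello = /path/to/hello.py
from mercurial import registrar

cmdtable = {}
command = registrar.command(cmdtable)

@command(b'hello', [], b'hg hello')
def hello(ui, repo, **opts):
    """print a greeting and the repository root"""
    ui.write(b'hello from %s\n' % repo.root)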

Mercurial: How to manage common/shared code

I'm using Mercurial for personal use and am contemplating it for some distributed projects as an alternative to SVN, for various reasons.
I'm getting comfortable with using it for self-contained projects and can see various options for sharing; however, I haven't yet found any guidance on managing common libraries that are included in multiple projects, in a manner similar to what externals provide in Subversion.
The most obvious shared lump of code is error handling and reporting; we want this to be pretty much the same in all projects (it's fairly well evolved). There is also utility code, control libraries, and the like that we find better to have as projects built with each solution than to pull in as compiled classes (not least because it ensures they are kept up to date; continuous integration helps us address breaking changes).
Thoughts? (I hate open-ended questions, but I want to know what, if anything, others are doing.)
Mercurial 1.3 now includes nested repository support, which can be used to express dependencies. The other option is to let your build system handle the download and tracking of dependencies, using something like Ivy or Maven, though those are more focused on pulling down compiled code.
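For the nested (sub)repository route, the basic setup looks roughly like this (the repository URL and path are placeholders):
hg clone https://hg.example.org/common lib/common    # put the shared code in place
echo 'lib/common = https://hg.example.org/common' > .hgsub
hg add .hgsub
hg commit -m "track common library as a subrepository"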
The world has changed since I asked that question and the solution I now use is different.
The simple answer is now to use packages (specifically NuGet, as I do .NET) to deliver the common code instead of nesting repos and including the projects in a solution.
So I have common code built into NuGet packages and hosted using TeamCity; where previously I would have an external and include the project/source, I now just reference the package.
Use the Forest extension; it emulates svn externals for hg, to some extent that is.
Subrepositories (with a good guide) or Guestrepo ("to overcome ... limitations" of subrepos) are today's language-agnostic answers.

Studying standard library sources

How does one study open-source library code, particularly standard libraries?
The code base is often vast and hard to navigate. How do you find a particular function or class definition?
Do I search through downloaded source files?
Do I need cvs/svn for that?
Maybe web-search?
Should I just know the structure of the standard library?
Is there any reference on it?
Or do some IDEs have such features? Or some other tools?
How do you do it effectively without one?
What are the best practices of doing this in any open-source libraries?
Is there any convention for how sources are manipulated on Linux/Unix systems?
What are the differences for specific programming languages?
A broad presentation of the subject is highly encouraged.
I mark this 'community wiki' so everyone can rephrase and expand my awkward formulations!
Update: I probably didn't express the problem clearly enough. What I want is to view just the source code of some specific library class or function. The problem is mostly about work organization and usability: how do I navigate the huge pile of sources to find the thing? Maybe there are specific tools or approaches? It feels like some solution(s) for this should have existed for a long time.
One thing to note is that standard libraries are sometimes (often?) optimized more than is good for most production code.
Because they are widely used, they have to perform well over a wide variety of conditions, and may be full of clever tricks and special logic for corner cases.
Maybe they are not the best thing to study as a beginner.
Just a thought.
Well, I think it's insane to just sit down and read a library's code. My approach is to search whenever I come across the need to implement something myself, and then study the way it's implemented in those libraries.
There are also a lot of projects/libraries with excellent documentation, which I find more important to read than the code. On Unix-based systems you often find valuable information in the man pages.
Wow, that's a big question.
The short answer: it depends.
The long answer:
Some libraries provide documentation while others don't. Standard libraries are usually pretty well documented, whether your chosen implementation of the library includes documentation or not. For instance, you may have found an implementation of the C standard library without documentation, but the C standard has been around long enough that there are hundreds of good reference books available. Documentation with hyperlinks is a very useful way to learn a new API. In any case, the first place I would look is the library's main website.
For less well-known libraries lacking documentation, I find two different approaches very helpful.
First is a doc generator. Nearly every language I know of has one. It basically parses a source tree and creates documentation (usually as HTML or XML) which can be used to learn a library. Some use specially formatted comments in the code to create more complete documentation. JavaDoc is one good example of this, and doc generators for many other languages borrow from JavaDoc. (A tiny illustration follows this answer.)
Second is an IDE with a class browser. These act as a sort of on-the-fly documentation. Some display just the library's interface; others include description comments from the library's source.
Both of these require access to the library's source (which will come in handy if you actually intend to use the library).
Many of these tools and techniques work equally well for closed/proprietary libraries.
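Picking up the doc-generator idea above, here is a tiny illustration in Python (the module and function are made up; pydoc plays roughly the role JavaDoc plays for Java):
# mymodule.py - a toy module for a doc generator to chew on.
def clamp(value, low, high):
    """Return value limited to the inclusive range [low, high].

    Doc generators such as pydoc or Sphinx read these docstrings
    and turn them into browsable documentation.
    """
    return max(low, min(value, high))

# Generate HTML docs with: python -m pydoc -w mymodule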
The standard Java libraries' source code is available. For a beginning Java programmer it can be a great read. The Collections framework especially is a good place to start. Take, for instance, the implementation of ArrayList and learn how you can implement a resizeable array in Java. Most of the source even has useful comments.
The best parts to read are probably those whose purpose you can understand immediately. Start with the easy pieces and try to follow all the steps that are hidden behind that single call you make from your own code.
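To see the kind of idea you would learn there, here is the growth strategy behind ArrayList-style dynamic arrays, sketched in Python for brevity (Java's real ArrayList grows by roughly 1.5x and adds many more checks):
# A resizable array: fixed-size backing store, grown when it fills up.
class DynamicArray:
    def __init__(self):
        self._size = 0
        self._store = [None] * 4              # small initial capacity

    def append(self, item):
        if self._size == len(self._store):
            # Grow the backing store (doubling here) and copy elements,
            # which keeps appends O(1) amortized.
            self._store = self._store + [None] * len(self._store)
        self._store[self._size] = item
        self._size += 1

    def __getitem__(self, index):
        if not 0 <= index < self._size:
            raise IndexError(index)
        return self._store[index]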
Something I do from time to time:
apt-get source foo
Then create a new C++ project (or whatever) in Eclipse and import the source.
=> Wow! Browsable! (use F3)