"Merged hierarchies can be used in reports (New)" New Feature in BO 4.1 - business-objects

Firstly, I would like to say that I am disappointed with the documentation for SAP BusinessObjects (at present we are using version 4.1). I can't believe that such a system has so little useful documentation or tutorials covering real-life examples. This is really discouraging.
Now I am coming to my question: in version 4.1 it is stated that the new version offers the following feature: "Merged hierarchies can be used in reports (New)". The question is: is it possible to implement collapse/expand functionality in version 4.1 in the same way as shown in the following video: https://www.youtube.com/watch?v=NEAhfX2Bqc8 (starting at 2:08)? If yes, could anyone please explain how to implement the functionality, or point me to a video tutorial that tackles this issue?
(I assumed that the mentioned new feature is the same as the one shown in the video, which is implemented with BEx queries. We are not using BEx queries; our data source is a database that resides on SQL Server.)
Thanks!

Hierarchies are specific to OLAP environments (e.g. SQL Server Analysis Services or SAP BW through BICS). If you're using a relational database such as SQL Server, there is no such thing as a hierarchy.
Do not confuse this with navigation paths (Information Design Tool - IDT) or object hierarchies (Universe Design Tool – UDT).
From the documentation:
A navigation path is an object that defines the drill path used in SAP BusinessObjects reporting tools. A drill path is a list of drillable business objects that allow a report analyst to drill down on a dimension.
Thus, a navigation path or object hierarchy is only used to define drill paths in your document, not to hierarchically define your data in a given dimension.
More information about this:
Navigation paths: Information Design Tool User Guide, paragraph 12.14: About navigation paths for objects
Object hierarchies: Universe Designer, page 364: Defining hierarchies
If you're looking for documentation on SAP BusinessObjects, try these resources:
Analytics Knowledge Center
Official Product Tutorials – SAP BI Suite

Related

Is there a Simple OSLC Metamodel Showing Entities and Relationships?

There seems to be any amount of RDF-format material for OSLC, but what I'm looking for is a simple E-R-like view of the OSLC metamodel which shows the concepts and relationships, and which can be used to understand the organisation and the queries that are possible.
Is there a (graphic) representation of the OSLC metamodel anywhere?
If you are after a simple graphical diagram, you can find UML models in the Lyo-Docs repo: the source .emx files, as well as .png snapshots, are under the folder "OSLC-V2/images/".
If you are developing OSLC applications, you might want to consider the modelling tool Lyo Designer.
There you can find a graphical model of the OSLC Core and Domain concepts. The models are based on an OSLC-specific modelling language. Lyo Designer allows you to define/extend your own models, from which you can generate an OSLC application based on the Eclipse Lyo SDK.
I assume here that you are aware of the Java class implementations of the OSLC Core concepts in Eclipse Lyo. There is also an implementation of the domain specifications.
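For illustration, here is a minimal sketch of what a domain resource class looks like with the OSLC4J annotations from Eclipse Lyo; the resource type, namespace, and property URI below are placeholders for this example, not an official domain vocabulary:

    import org.eclipse.lyo.oslc4j.core.annotation.OslcNamespace;
    import org.eclipse.lyo.oslc4j.core.annotation.OslcPropertyDefinition;
    import org.eclipse.lyo.oslc4j.core.annotation.OslcResourceShape;
    import org.eclipse.lyo.oslc4j.core.model.AbstractResource;

    // Illustrative resource class: the namespace and property URIs are
    // placeholders, not an official OSLC domain vocabulary.
    @OslcNamespace("http://example.com/ns/demo#")
    @OslcResourceShape(title = "Demo Resource Shape",
                       describes = "http://example.com/ns/demo#DemoResource")
    public class DemoResource extends AbstractResource {

        private String title;

        // Map the getter to an RDF property, as OSLC4J expects
        @OslcPropertyDefinition("http://purl.org/dc/terms/title")
        public String getTitle() {
            return title;
        }

        public void setTitle(String title) {
            this.title = title;
        }
    }

OSLC4J uses these annotations to serialise the class to and from RDF, which is what ties the Java model back to the metamodel diagrams mentioned above.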

What is WTX (WebSphere Transformation Extender) and what does it do?

I'm not finding a lot of information on the internet about what WTX is and what it does.
Can you shed some light on it, and give me an example?
Also, is Microsoft's EAI product, BizTalk Server, related to WTX?
Thanks
WTX is a data transformation tool; for example, it will transform CSV into XML, one flavour of XML (e.g. RosettaNet) into another (e.g. an application format), etc.
It is owned and developed by IBM.
Transformations are referred to as maps and are built by dragging and dropping fields in the Eclipse-based design tool.
The WTX runtime engine can be called in a variety of ways - e.g. through the Java API, the TX Launcher, IBM Integration Bus, etc.
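As a hedged sketch of the simplest of those routes, the following Java shells out to a command-line runner to execute a compiled map. The executable name and arguments here are hypothetical placeholders; the real launcher syntax depends on your WTX installation:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    // Sketch only: runs a compiled WTX map via a command-line launcher.
    // "dtxcmdsv" and the map file name are placeholders; consult your
    // installation's documentation for the actual command and flags.
    public class RunMap {
        public static void main(String[] args) throws Exception {
            ProcessBuilder pb = new ProcessBuilder("dtxcmdsv", "csv_to_xml.mmc");
            pb.redirectErrorStream(true); // merge stderr into stdout
            Process p = pb.start();
            try (BufferedReader r = new BufferedReader(
                    new InputStreamReader(p.getInputStream()))) {
                String line;
                while ((line = r.readLine()) != null) {
                    System.out.println(line); // echo the runner's output
                }
            }
            System.out.println("exit code: " + p.waitFor());
        }
    }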
WTX was previously known as Mercator and DataStage TX.
EAI BizTalk Server is also a tool for integrating applications and transforming data. I'm not aware of any adapters between the two tools, but you could develop one using the WTX API.
WTX is one of the most powerful data transformation tools in the integration industry.
Generally, XML data transformation can be done in many ways, such as with XSLT, but when it comes to EDI standard messages like EDIFACT, the ASCII transformation logic is very difficult to implement in other tools.
WTX can easily handle these types of transformations and is a very flexible transformation tool for any type of message.
Example: consider two communicating systems where system A sends data as an IDoc and system B wants XML. You can place a WTX map in between to achieve this. You can also handle business rules in WTX.

SSRS 2008 - Create a chart of a directed graph to visualise ETL jobs

I can't find anything that hints towards native support for charting graph data structures (otherwise known as "network maps" to some), and in my case, a directed graph. I want to create a visualisation of our ETL dependency chain at work to show the steps that each different 'job' relies on before being able to proceed.
Questions:
Has anybody been able to simulate/hack/work around this lack of out-of-the-box functionality in SSRS?
Any ideas on how this might be achieved if no one has thought of doing it before?
EDIT - 2014-10-30
Two years and no answer, so I've accepted the most promising advice on a workaround to get what is needed, as no direct functionality has been found.
From left field:
You could wrap an SSIS package around your "ETL jobs". The SSIS Control Flow surface has a GUI for expressing task dependencies. It's functional, if not visually outstanding. Your "ETL jobs" could be Execute SQL Task or Execute Process Task objects. You can connect the precedence constraints to show dependencies.
This could be either for real use or just for documentation purposes. If you use it for real, you'll find it's a great way to control ETL dependencies and parallelism.

What is relationship between GDAL, FDO and OGR?

Their documentation is simple and professional, but it doesn't say much about the relationship between these open-source projects.
When should I use which one? And which one is suitable for what scenario?
If you are a GIS developer who is familiar with these projects, can you explain?
The fundamental common denominator of all three software packages is that they are all data access abstractions. In particular, they provide access to geospatial data. In general, they all follow a similar convention:
- define a collection of types and objects
- define low-level data sources implemented in the form of a set of drivers (as they are named in GDAL/OGR) or providers (as they are named in FDO)
FDO, GDAL and OGR are all implemented in the C++ programming language.
Along with the similarities, there are many differences. GDAL/OGR gives access to data stored in an enormous number of geospatial formats and provides lots of data processing algorithms and operators. FDO provides those features too (interestingly, thanks to integration with GDAL/OGR in some places, like the FDO Provider for GDAL), but it feels more like a framework, whereas GDAL/OGR feels more like a library.
Anyhow, it is not possible to give you a definitive answer as to which one fits where.
You may find Matthew Perry's blog and the discussion that follows it helpful: FDO, GDAL/OGR and FME?
Note that GDAL and OGR are bundled together under the umbrella of a common software project called simply GDAL. Both names seem to be acronyms and are explained in the GDAL FAQ; check the following Q&A entries:
What is GDAL?
What does GDAL stand for?
What is this OGR stuff?
What does OGR stand for?
In basic terms, GDAL is used for reading, writing and transforming raster data, while OGR can do the same with vector data. I am not as familiar with FDO, but it appears to be an API used to access (from database sources), manipulate and analyze all kinds of geospatial data, and relies on GDAL and OGR for those purposes.
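To make that split concrete, here is a minimal sketch using the GDAL/OGR Java (SWIG) bindings; the file names are placeholders, and the exact setup of the org.gdal packages depends on how the bindings were built on your system:

    import org.gdal.gdal.Dataset;
    import org.gdal.gdal.gdal;
    import org.gdal.ogr.DataSource;
    import org.gdal.ogr.ogr;

    // Minimal sketch: GDAL opens raster data, OGR opens vector data.
    // The file names below are placeholders.
    public class GdalOgrDemo {
        public static void main(String[] args) {
            gdal.AllRegister(); // register raster drivers
            ogr.RegisterAll();  // register vector drivers

            Dataset raster = gdal.Open("elevation.tif");
            if (raster != null) {
                System.out.println("Raster size: " + raster.getRasterXSize()
                        + " x " + raster.getRasterYSize()
                        + ", bands: " + raster.getRasterCount());
            }

            DataSource vector = ogr.Open("parcels.shp");
            if (vector != null) {
                System.out.println("Vector layers: " + vector.GetLayerCount());
                System.out.println("First layer: " + vector.GetLayer(0).GetName());
            }
        }
    }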

Best practices for version information?

I am currently working on automating/improving the release process for packaging my shop's entire product. Currently the product is a combination of:
Java server-side codebase
XML configuration and application files
Shell and batch scripts for administrators
Statically served HTML pages
and some other stuff, but that's most of it
All or most of which have various versioning information contained in them, used for varying purposes. Part of the release packaging process involves doing a lot of finding, grep'ing and sed'ing (in scripts) to update the information. This glue that packages the product seems to have been cobbled together in an organic, just-in-time manner, and is pretty horrible to maintain. For example, some Java methods create Date objects for the time of release, the arguments for which are updated by a textual replacement, without compiler validation... just, urgh.
I'm trying to avoid giving examples of actual software used (i.e. CVS, SVN, ant, etc.) because I'd like to avoid the "use xyz's feature to do this" responses and concentrate more on general practices. I'd like to blame shoddy design for the problem, but if I had to start again, still using varying technologies, I'd be unsure how best to go about handling this, beyond laying down conventions.
My question is: are there any best practices, hints, or tips for maintaining and updating versioning information across different technologies, file types, platforms and version control systems?
Create a properties file that contains the version number and have all of the different components reference that properties file (a minimal Java sketch follows this list):
Java code can read the properties through the java.util.Properties class
XML can use includes (e.g. XInclude)
HTML can use JavaScript to write the version number from the properties into the page
Shell scripts can read in the file
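For the Java item above, a minimal sketch assuming a version.properties file on the classpath (the file name and keys are conventions for this example, e.g. version=1.4.2 and release.date=2009-06-01):

    import java.io.InputStream;
    import java.util.Properties;

    // Minimal sketch: loads the shared version.properties once and
    // exposes its values to the rest of the Java codebase.
    public final class VersionInfo {
        private static final Properties PROPS = new Properties();

        static {
            try (InputStream in =
                    VersionInfo.class.getResourceAsStream("/version.properties")) {
                if (in != null) {
                    PROPS.load(in);
                }
            } catch (Exception e) {
                throw new ExceptionInInitializerError(e);
            }
        }

        private VersionInfo() {
        }

        public static String version() {
            return PROPS.getProperty("version", "unknown");
        }

        public static String releaseDate() {
            return PROPS.getProperty("release.date", "unknown");
        }
    }

The same key=value file can be sourced directly by shell scripts, as long as the values contain no spaces or shell metacharacters.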
Indeed, to complete Craig Angus's answer, the rule of thumb here should be to not include any meta-information in your normal delivery files, but to collect that metadata (version number, release date, and so on) in one special file included in the release.
That helps when you use one VCS (Version Control System) tool from development through homologation to pre-production.
That means whenever you load a workspace (whether for developing, testing, or preparing a release for production), it is the versioning tool that gives you all the details.
When you prepare a delivery (a set of packaged files), you should ask that VCS tool for every piece of meta-information you want to keep, and write it to a special file that is itself included in the said set of files.
That delivery should be packaged in an external directory (outside any workspace) and:
copied to a shared directory (or a Maven repository) if it is a non-official release (just a quick packaging to help the team next door who is waiting for your delivery). That way you can make 10 or 20 deliveries a day; it does not matter, as they are easily disposable.
imported into the VCS in order to serve as an official delivery, and in order to be deployed easily, since all you need is to ask the versioning tool for the right version of the right delivery, and you can begin to deploy it.
Note: I just described a release management process mostly used for many inter-dependent projects. For one small single project, you can skip the import into the VCS tool and store your deliveries elsewhere.
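As a sketch of the "ask the VCS, write a special file" step above, the following assumes SVN and shells out to svnversion to capture the working-copy revision at packaging time; the output file name is a convention for this example:

    import java.io.BufferedReader;
    import java.io.FileWriter;
    import java.io.InputStreamReader;
    import java.util.Date;

    // Sketch of a packaging step: ask the VCS for the revision (here via
    // SVN's svnversion command) and write it, with the build date, into a
    // release-info file shipped inside the delivery.
    public class WriteReleaseInfo {
        public static void main(String[] args) throws Exception {
            Process p = new ProcessBuilder("svnversion").start();
            String revision;
            try (BufferedReader r = new BufferedReader(
                    new InputStreamReader(p.getInputStream()))) {
                revision = r.readLine(); // e.g. "4168" or "4123:4168M"
            }
            p.waitFor();

            try (FileWriter w = new FileWriter("release-info.properties")) {
                w.write("revision=" + revision + "\n");
                w.write("build.date=" + new Date() + "\n");
            }
        }
    }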
In addition to Craig Angus's points, include the versions of the tools used.