Feasibility of integrating a custom MDF with EmployeeTimeSheetEntry via HCI

I want to leverage the functionality of Employee Central Time-Off to show an employee's punch-in and punch-out times. Is it possible to implement a mapping of the punch-in and punch-out time fields from a custom MDF, say cust_Clock_time, to the fields of EmployeeTimeSheetEntry via HCI?
In other words, from the MDF's perspective this would be an outbound integration from SF to HCI, but from EmployeeTimeSheetEntry's perspective it would be an inbound integration into SF.
(1) Mapping fields sheet (screenshot)
(2) iFlow after running traces (screenshot)

As far as I understand, you can achieve this really easily with SF Integration Center.
Source: SF, Target: SF, nice-and-easy WYSIWYG Mapping Editor...


Detect data anomalies in a data pipeline and trigger a scheduled pipeline run

In Foundry, we have a data pipeline where we want to insert a code node (repo or workbook) that detects anomalies and then sends an email or some other alert about the problem.
Having trouble finding this in the documentation, can someone point me to it?
Ideally we would love to have the code trigger the Scheduler to do a pipeline run that creates a report (maybe even in Quiver, to do some timeline analysis). Is this possible? Are there examples in the documentation?
Check out the Data Health section of the platform documentation. There are a number of possible patterns, including defining data expectations in your code.
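As a concrete example, here is a minimal sketch of an expectations check attached to a Python transform. The dataset paths and column names are hypothetical, and the exact expectation helpers may differ by Foundry version, so treat this as a sketch rather than copy-paste code:

    from transforms.api import transform_df, Input, Output, Check
    from transforms import expectations as E

    @transform_df(
        Output(
            "/MyProject/datasets/readings_clean",  # hypothetical path
            checks=[
                # Fail the build when the key column contains nulls
                Check(E.col("reading_id").non_null(),
                      "reading_id is never null", on_error="FAIL"),
                # Only warn for softer anomalies
                Check(E.col("value").gte(0),
                      "values are non-negative", on_error="WARN"),
            ],
        ),
        source=Input("/MyProject/datasets/readings_raw"),  # hypothetical path
    )
    def clean_readings(source):
        return source.filter(source["value"].isNotNull())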
Whether defined as expectations or as dataset health checks, failures can be configured to create Issues within the platform. Issues can have default assignees (individuals or groups) and trigger notifications, both in-platform and over email (depending on per-user configuration).
Health check failures will also automatically populate the data health tab in the Project Catalog view, which can serve as a dashboard to view the overall health of the project. You can also surface these in the Data Lineage view with a coloring based on Data Health to understand issues across the breadth of the pipeline.
For a comprehensive approach to pipeline health, review the Pipelines and best practices section in the Code Repositories documentation.

Programmatically Recomputing Precise Part Volume From Third-Party Files Using Forge APIs

I'm looking for best practices and performance-guided recommendations for recomputing a model's volume when it's missing from the source file. This is in the context of a web application I am building that enables:
Uploading 3D models in a variety of file formats
Interacting with these models using the Autodesk Viewer
Displaying mass properties, e.g. volume and surface area, alongside the viewer (the subject of this post)
Background
Some file formats carry very reliable volume information that is computed and written to the file by the authoring application. For these files, we can access volume as a property via the Autodesk Viewer.
Other formats, however, do not carry volume information, at least not in a manner that is openly accessible with tools other than the authoring application (a prime example is SolidWorks). This leaves us with a giant gap to fill: we need to recompute the model's volume using what's in the file.
Known Workarounds and Options
Autodesk published a blog post detailing an approach for approximating model volume using the model's triangles inside the viewer. I think it's an ideal solution for use cases that can afford to trade accuracy for a bump in performance, and it centers everything in the viewer, making development and subsequent maintenance simpler. This application, however, cannot rely on such approximations. I'm left reviewing options for leveraging the Autodesk Design Automation API to:
Spin up an instance of Inventor
Load the model file
Rely on iLogic to trigger a re-computation of the model's part properties (perhaps like this?)
Push that data back to my web application
Where I Need Help
My understanding is that an AppBundle and Activity are defined ahead of time and then every uploaded model would be submitted as a work item.
I am hoping for guidance in:
whether this is the only approach or whether there are other options worth considering
how best to orchestrate the end-to-end process from an order of operations/workflow standpoint to maximize performance
Current Thinking
For example, I'm thinking that as soon as the source file is uploaded, I'd kick off two parallel processes: the first translates the source file for the viewer; the second spins up Inventor and triggers the downstream process that gets volume.
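Roughly like this sketch, where both helpers are hypothetical stand-ins for the real Forge calls:

    import asyncio

    async def translate_for_viewer(urn: str) -> str:
        # Hypothetical stand-in: would call the Model Derivative API and
        # poll the manifest until the SVF translation completes.
        await asyncio.sleep(0)
        return urn + "-svf"

    async def compute_volume_via_da(urn: str) -> float:
        # Hypothetical stand-in: would submit a Design Automation work item
        # that opens the file in Inventor and reports the part volume.
        await asyncio.sleep(0)
        return 0.0

    async def process_upload(urn: str) -> dict:
        # Run both branches concurrently; neither blocks the other.
        svf_urn, volume = await asyncio.gather(
            translate_for_viewer(urn),
            compute_volume_via_da(urn),
        )
        return {"viewer_urn": svf_urn, "volume": volume}

    print(asyncio.run(process_upload("urn:example")))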
The other option I'm considering is handling all of the work in Inventor and pushing out an SVF file to the viewer that's enriched with volume data. The advantage of this approach is that my frontend will have only one source for volume data (it will be in the enriched SVF whether or not it was supplied in the original file).
In an ideal world I'd invoke the Design Automation API only when volume data is missing from the source file, but I'd only know that after translating the file and bringing it back to the viewer. Given that many of our files are created in SolidWorks and other high-end proprietary CAD platforms, my working hypothesis is that we'll need to fill in volume gaps more often than not.
Your understanding is correct:
appbundle is simply a collection of files (binaries, data) encapsulating a specific Inventor/Revit/3dsMax/AutoCad plugin
activity is a kind of a job template specifying which application should be invoked, which appbundle should be loaded into the application, what inputs will be provided to the job, and what outputs will be generated
work item is then a specific instance of a job, binding the activity inputs and outputs to specific URLs
There is currently no other way to access the Design Automation functionality than through these three types of entities.
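To make that concrete, once the appbundle and activity exist, submitting a work item is a single REST call. A rough Python sketch (the activity ID and argument names are illustrative; they must match whatever you declared on your activity):

    import requests

    DA_BASE = "https://developer.api.autodesk.com/da/us-east/v3"

    def submit_workitem(token, activity_id, input_url, output_url):
        payload = {
            "activityId": activity_id,  # e.g. "MyNickname.ComputeVolume+prod"
            "arguments": {
                # Parameter names must match the activity definition;
                # "inputFile" and "outputJson" are illustrative.
                "inputFile": {"url": input_url},
                "outputJson": {"verb": "put", "url": output_url},
            },
        }
        resp = requests.post(
            f"{DA_BASE}/workitems",
            json=payload,
            headers={"Authorization": f"Bearer {token}"},
        )
        resp.raise_for_status()
        # Poll GET {DA_BASE}/workitems/{id} until the status is "success".
        return resp.json()["id"]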
I would suggest the following:
wherever possible, use the Design Automation for Inventor to compute the precise areas/volumes
for file formats that cannot be imported into Inventor or any other Design Automation engine, you could use tools like https://github.com/petrbroz/forge-convert-utils to parse the SVF and compute (a very rough estimate of) the volume/surface area from the triangular meshes; however, this will be quite computationally expensive, and imprecise
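For the mesh route, the standard technique is the divergence theorem: sum the signed volumes of the tetrahedra formed by each triangle and the origin. A minimal sketch, assuming you have already extracted raw vertex positions and triangle indices from the SVF:

    def mesh_volume(vertices, triangles):
        """Volume of a closed triangle mesh via the divergence theorem.

        vertices:  list of (x, y, z) tuples
        triangles: list of (i, j, k) index triples with consistent winding

        The signed tetrahedron volumes only cancel correctly if the mesh
        is closed and consistently oriented, so treat the result as an
        estimate for viewer-grade tessellations.
        """
        total = 0.0
        for i, j, k in triangles:
            ax, ay, az = vertices[i]
            bx, by, bz = vertices[j]
            cx, cy, cz = vertices[k]
            # Scalar triple product a . (b x c) = 6 * signed tet volume
            total += (ax * (by * cz - bz * cy)
                      + ay * (bz * cx - bx * cz)
                      + az * (bx * cy - by * cx))
        return abs(total) / 6.0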

SAP retail functionality in ECC

I'm currently dabbling with SAP ECC, configuring a solution for an apparel company. I am implementing a solution where inventory is dispatched to retail outlets. Any clues as to which modules, and which transaction codes within those modules, I should be looking at?
You could use the MM module for creating, changing, and displaying materials/commodities (e.g. transactions MM01, IH09) and for creating inventories and inventory lists (e.g. transaction MI01), and possibly (depending on your or your customer's actual needs) the SD module for sales orders, billing, and delivery.
For an apparel company you will likely need:
MM: Materials Management
SD: Sales and Distribution
PP: Production Planning
FI: Finance
as a baseline only. This doesn't take into account any of the integration work needed to connect with the company's suppliers and customers.
You can use the SAP Retail industry solution, which should contain most of the functionality you'd need.
It can be installed as an add-on module or activated through business function switches (transaction SFW5).
More about switch framework here:
https://sapinsider.wispubs.com/Assets/Articles/2008/October/Industry-Solutions-Are-Now-Integrated-Into-The-SAP-ERP-Core-How-The-Switch-And-Enhancement-Framework

What "other features" could be incorporated into a train database?

This is a mini project for a DBMS course. My task is to develop a database for the management of passenger trains.
I'm designing tables for Customers, Trains, Ticket Booking (via Telephone & Internet), Origins and Destinations.
The instructor said we are free to incorporate other features into our database model. Some of the features we can include are listed below:
Ad-hoc Querying
Data Mining
Demographic Passenger Mapping
Origin and Destination Mapping
I have no clue what these features mean. I know about data mining but am unable to apply it in this context. Can anyone kindly expand on these features or suggest new ideas?
EDIT: What is Ad-hoc Querying? Give an example in this context.
Data mining would involve extracting useful facts and figures out of the data collected by your system and stored in the database. For example, data mining might discover that trains between city X and city Y are always 5 minutes late, or are never at more than 50% capacity, etc. So you may wish to develop some tools or scripts that run automatically and generate statistics (graphs are best) which display this information and highlight unusual trends. In the given example, the schedulers could then analyse why the trains are always late (e.g., maybe the train speedometers are wrong?).
Both points 3 and 4 are a subset of data mining, in my opinion. There is a huge number of metrics you could try to measure; it is really just whatever you can think of. If you specify what type of data you are going to collect, that will make it easier to offer suggestions.
Basically, data mining just means "sort the data to find interesting facts".
Based on the comment below, you could look for:
% of internet vs. phone sales
popular destinations & origins
customers age/sex/location
usage vs. time of day
...
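As for the ad-hoc querying question: it just means one-off questions asked directly against the schema rather than canned reports. For instance, with a hypothetical booking table, the first item on the list above could be answered like this:

    import sqlite3

    conn = sqlite3.connect(":memory:")  # hypothetical schema
    conn.execute("""
        CREATE TABLE booking (
            id INTEGER PRIMARY KEY,
            channel TEXT,      -- 'internet' or 'telephone'
            origin TEXT,
            destination TEXT
        )
    """)
    conn.executemany(
        "INSERT INTO booking (channel, origin, destination) VALUES (?, ?, ?)",
        [("internet", "Delhi", "Agra"), ("telephone", "Delhi", "Agra"),
         ("internet", "Pune", "Goa")],
    )

    # An ad-hoc query: "what share of bookings came over the internet,
    # per route?" -- a question of the moment, not a canned report.
    rows = conn.execute("""
        SELECT origin, destination,
               100.0 * SUM(channel = 'internet') / COUNT(*) AS pct_internet
        FROM booking
        GROUP BY origin, destination
    """)
    for origin, dest, pct in rows:
        print(f"{origin} -> {dest}: {pct:.0f}% internet")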

Any Open Source Pregel like framework for distributed processing of large Graphs?

Google has described a novel framework for distributed processing of massive graphs.
http://portal.acm.org/citation.cfm?id=1582716.1582723
I wanted to know whether, similar to Hadoop (MapReduce), there are any open-source implementations of this framework.
I am actually in the process of writing a pseudo-distributed one using Python and the multiprocessing module, and thus wanted to know if someone else has also tried implementing it.
Public information about this framework is extremely scarce (the link above and a blog post at Google Research).
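For reference, here is the shape of the superstep loop I am emulating (a single-process sketch; the real implementation would partition vertices across multiprocessing workers):

    def pregel(vertices, edges, compute, max_supersteps=30):
        """vertices: {vid: state}; edges: {vid: [neighbor vid, ...]}.
        compute(vid, state, msgs, neighbors)
            -> (new_state, [(target, msg), ...], halt)
        """
        inbox = {vid: [] for vid in vertices}
        active = set(vertices)
        for _ in range(max_supersteps):
            if not active:
                break
            outbox = {vid: [] for vid in vertices}
            still_active = set()
            for vid in active:
                state, outgoing, halt = compute(
                    vid, vertices[vid], inbox[vid], edges[vid])
                vertices[vid] = state
                for target, msg in outgoing:
                    outbox[target].append(msg)
                if not halt:
                    still_active.add(vid)
            inbox = outbox
            # Pregel semantics: a halted vertex is reactivated by a message.
            active = still_active | {v for v, msgs in inbox.items() if msgs}
        return vertices

    # Example: single-source shortest (hop) paths from vertex "a".
    INF = float("inf")
    graph = {"a": ["b", "c"], "b": ["c"], "c": []}

    def sssp(vid, dist, msgs, neighbors):
        best = min([dist] + msgs)
        # Send only when our distance improves (or on the first superstep
        # for the source), then halt until new messages arrive.
        if best < dist or dist == 0:
            return best, [(n, best + 1) for n in neighbors], True
        return best, [], True

    print(pregel({"a": 0, "b": INF, "c": INF}, graph, sssp))
    # {'a': 0, 'b': 1, 'c': 1}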
Apache Giraph http://giraph.apache.org
Phoebus https://github.com/xslogic/phoebus
Bagel https://github.com/mesos/spark/pull/48
Hama http://hama.apache.org/
Signal-Collect http://code.google.com/p/signal-collect/
HipG http://www.cs.vu.nl/~ekr/hipg/
The main Hadoop project for distributed graph processing is the Hama project. It's still in incubation, though.
The project has broken its work into two areas: a matrix package and a graph package.
Update:
A better option would be the Apache Giraph project which is based on Google Pregel.
Yes, there is a new project called Golden Orb, an open-source Pregel implementation written in Java that runs on both HBase and Cassandra.
It has been submitted to the Apache Incubator for approval, and Ravel, the company behind Golden Orb, said they are releasing it this month (http://www.raveldata.com/goldenorb/).
See http://www.quora.com/Graph-Databases/What-open-source-graph-databases-support-horizontal-scaling
UPDATE: GraphX is GraphLab2 on Spark implemented by Joey Gonzalez, the creator of GraphLab2.
Spark's unique primitives make GraphX-Pregel the fastest JVM-based Pregel implementation. Spark is written in Scala, but it has Java and Python APIs.
See...
GraphX: A Resilient Distributed Graph System on Spark (PDF)
Introduction to GraphX, by Joseph Gonzalez, Reynold Xin - UC Berkeley AmpLab 2013 (YouTube)
My Hacker News comment/overview on Spark.
P.S. There is also Bagel, which was the first cut at Pregel on Spark. It works; however, GraphX will be the way forward.
Two projects from Carnegie Mellon University provide Pregel-style computation on graphs:
GraphLab http://graphlab.org
GraphChi http://graphchi.org
The programming model is not exactly the same as Pregel's, as they are not based on message passing but on modifying the graph (edge and vertex) data directly. That said, it is easy to emulate Pregel in these frameworks.
There is also Signal/Collect, a framework written in Scala that now uses Akka:
http://code.google.com/p/signal-collect/
https://github.com/uzh/signal-collect
From their website:
In Signal/Collect an algorithm is written from the perspective of vertices and edges. Once a graph has been specified the edges will signal and the vertices will collect. When an edge signals it computes a message based on the state of its source vertex. This message is then sent along the edge to the target vertex of the edge. When a vertex collects it uses the received messages to update its state. These operations happen in parallel all over the graph until all messages have been collected and all vertex states have converged.
Many algorithms have very simple and elegant implementations in Signal/Collect. You find more information about the programming model and features in the project wiki. Please take the time to explore some of the example algorithms below.
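To illustrate the model, here is a toy single-threaded emulation in Python (the real framework is Scala on Akka and performs the signal and collect operations in parallel across the graph):

    def signal_collect(states, edges, signal, collect, steps=10):
        """states: {vid: state}; edges: [(src, dst), ...].
        signal(src_state) -> message; collect(state, msgs) -> new state."""
        for _ in range(steps):
            # Signal phase: every edge computes a message from its source.
            inbox = {vid: [] for vid in states}
            for src, dst in edges:
                inbox[dst].append(signal(states[src]))
            # Collect phase: every vertex folds its messages into its state.
            new_states = {vid: collect(states[vid], inbox[vid])
                          for vid in states}
            if new_states == states:  # converged
                break
            states = new_states
        return states

    # Example: propagate the minimum reachable label through the graph.
    states = {"a": 1, "b": 5, "c": 9}
    edges = [("a", "b"), ("b", "c")]
    result = signal_collect(states, edges,
                            signal=lambda s: s,
                            collect=lambda s, msgs: min([s] + msgs))
    # result == {"a": 1, "b": 1, "c": 1}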
I created a framework called Phoebus. It is an implementation of Pregel written in Erlang. Check out my blog entry on applying the Pregel model to path finding as well.
Apache Giraph is currently in the Apache Incubator and under very active development, with committers from LinkedIn, Twitter, Facebook, and academia looking to bring it up to production scale very quickly. It is pretty directly modeled on Pregel and was originally developed at Yahoo! Research. We're looking for new contributors and have several introductory JIRA issues to help people get started with the project. We'd love to have you get involved.
Stanford students have developed an open-source implementation of Pregel:
http://infolab.stanford.edu/gps/