The following error is received while publishing a batch class in the Kofax Batch Manager.
"The advanced batch class property 'Process documents as independent batches' is selected. This option requires that the separation method occur in the Scan module."
I want the scanned PDF documents to be treated as individual documents rather than every page becoming an individual document. However, selecting "Process documents as independent batches" leads to the error above.
Each batch in Kofax Capture follows this hierarchy (note that folders can add additional levels of hierarchy):
Batch
Document
Page
A single batch usually has 1..n documents, and each document has 1..n pages. Here's an example for a batch with two documents:
It is entirely possible for a batch to have no documents, but pages (called loose pages):
However, you would not be able to export such a batch or process it with most modules.
The advanced batch class property 'Process documents as independent
batches' is selected. This option requires that the separation method
occur in the Scan module.
This feature splits the documents of a single batch into multiple batches. Taking my first screenshot as an example: assuming separation was performed during scanning, this would create two individual batches with one document each.
However, the key here is document separation, and I think you might be confusing the batch-splitting feature with document separation. Document separation makes sure that documents are created automatically by Kofax. Imagine you are processing order forms, and each form may have multiple pages. However, there is a barcode on the first page - Kofax can make use of that barcode to create documents. There are many different separation methods, and each of them is described in greater detail in the Online Help.
I would want the PDF documents being scanned as individual documents
I assume, however, that you are importing PDF documents into the Scan module manually. There is no way to separate imported (electronic) documents automatically unless they are converted to TIFF (where you could make use of barcodes or other separation methods), so you would need to create documents manually by right-clicking the imported PDF and selecting the appropriate entry in the context menu.
If importing electronic documents into Kofax Capture is your requirement, Kofax Import Connector (KIC) is the right tool to use. KIC can be configured to create documents automatically - for file-based import, that is usually one document per imported file.
I have a system that produces CSV data on a regular event-driven basis (say, daily). Each event triggers the creation of a new folder and a fixed set of CSV files, each representing different types of data. For instance:
PlansDB.csv - data for plans of action
StepsDB.csv - descriptions of steps used by different plans
GroupsDB.csv - data on groups that can handle plans
RoomsDB.csv - data on places where a group can work on a plan
ResultsDB.csv - the records of results from steps of a plan
These have fields that identify the relationships between the different files, and I have no problems creating a data model for the CSVs in any given folder.
But how do I switch between folders? Once I have a working data model and some reports built off it, I'd like to view those reports for specific folders of data. How does that work? Can I easily switch to yesterday's folder, or last week's, etc., with minimal effort (preferably by just pointing to the folder)?
The CSV files keep the same names across folders, and those names represent the types of data they store. Can Power BI use that?
And can I run reports over multiple folders while maintaining this data model? I know of the folder merge capability, but my attempts at using it just merge all files as if they were the same type, whereas I need each type merged separately.
You need to change the data source. To do this, from "Edit Queries" select "Data source settings":
Then click "Change Source..." button and select the new folder. After that Power BI Desktop will tell you to apply the changes and will reload the data from the new folder:
I've been using DAX to help me document my Power BI file. Using DAX queries I've been able to record all the fields that exist in the file, including calculated fields and measures. In my documentation process I am also looking for a way to record the visualizations on the report - namely the charts and graphs. Unfortunately, no DAX query I've read about provides data such as the visualization title, what fields it uses, or what kind of graph it is. Is there any DAX query that provides this information, as a whole or in part?
In addition to attempting to document with DAX, I have also looked at the raw data in the Power BI file (for those who may not know, you can rename your Power BI file from .pbix to .zip and view the raw data). The relevant files within the PBIX are either XML or JSON. Looking at ../Report/Layout.JSON specifically, I have seen JSON-formatted text that includes visualization data. Is there any easy way to extract this data and format it in a more readable fashion?
For clarity, I do not need the contents of the tables, but I would like a way to record what fields are being used in the visualization, rather than what fields merely exist.
EDIT: I've found a workaround. It isn't efficient, and I would still appreciate any knowledge on this subject.
I mentioned going through the Layout file, renaming it to .JSON and poking around in it with Notepad++. I've found that you can Ctrl+F for "displayName", "queryRef" and ""title\":show\":true,\"text\":\"". Break these all onto new lines and indent them with a tab (use Ctrl+H and replace with \n\t in Notepad++). These indent the JSON-formatted lines for Power BI pages, the fields called by visualizations, and the visualization titles (if they have any), respectively.
Save this document as .csv and load it into Excel by delimiting on tabs. Use your preferred process (I prefer the query editor) to remove the other, non-indented rows. There may still be a lot of excess characters on the indented lines which need to be removed manually. At the end of this process, though, I ended up with three columns in Excel listing the aforementioned fields I've been looking for.
On a PBIX file with more than a dozen pages and several hundred dependent fields this process took about three hours. If there are any faster ways to do this, I would love to hear about them.
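One faster option is to script the extraction instead of hand-editing in Notepad++. Below is a minimal Python sketch under a few assumptions: the Layout file has already been copied out of the renamed .zip, it is UTF-16 encoded (adjust if yours differs), and "displayName", "queryRef" and title text are still the keys of interest; the exact nesting varies between Power BI versions, so treat it as a starting point rather than a finished tool.

```python
# Minimal sketch: pull page names, referenced fields, and visual titles
# out of a PBIX Layout file extracted from the renamed .zip.
import json

def parse_maybe_json(value):
    """Some values in the Layout file are JSON stored as strings; unwrap them."""
    if isinstance(value, str) and value.lstrip().startswith(("{", "[")):
        try:
            return json.loads(value)
        except ValueError:
            return value
    return value

def walk(node, found):
    node = parse_maybe_json(node)
    if isinstance(node, dict):
        for key, value in node.items():
            if key in ("displayName", "queryRef") and isinstance(value, str):
                found.setdefault(key, []).append(value)
            if key == "title":
                title = parse_maybe_json(value)
                if isinstance(title, dict) and isinstance(title.get("text"), str):
                    found.setdefault("title", []).append(title["text"])
            walk(value, found)
    elif isinstance(node, list):
        for item in node:
            walk(item, found)

# The Layout file is typically UTF-16 encoded; change the encoding if yours differs.
with open("Layout", encoding="utf-16") as f:
    layout = json.load(f)

found = {}
walk(layout, found)
for key, values in found.items():
    print(key)
    for v in values:
        print("\t", v)
```

On the same kind of file this should take seconds rather than hours, and the printed output can be pasted into Excel or loaded into Power BI itself for further cleanup.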
As you have noted, DAX doesn't help you in this case because it will tell you about the model rather than the visuals on the report pages. The Layout file works, but you have to parse it for the information you need. You could probably just pull that JSON file into Power BI and process it there to get the info you want. There are also third party tools that can help with this. I just looked at https://app.datavizioner.com/ and it lists the ID of the visual, the type of visual, and each field used in the visual. It is currently free and just requires you to upload a PBIT of your report. It doesn't have the title of the visual that we see, so you would have to find a way to map the IDs you see to the human-friendly title of the visuals if you need that.
See http://radacad.com/power-bi-helper. It can tell you tables and columns in use. It also can export a list of all tables, columns, formulas, and roles in your model.
If you want stuff on the visualizations and how they are configured, Layout.json is the only way I know. The file does open nicely in Power Query if you were so inclined to try to make something of it.
My new Power BI comparer tool documents the whole Power BI file (pbit). The "CompareVisuals"-tab should provide you with all the information necessary.
It is also super fast: just fill in the path to the pbits (you can fill in the same path in both fields if you don't want to compare two files but just analyze one).
https://www.thebiccountant.com/2019/09/14/compare-power-bi-files-with-power-bi-comparer-tool/
So I have this FileMaker Pro 7 database. For my senior project, I am supposed to migrate the database to a MySQL database and then give it a PHP-based interface in third normal form...
The company allows us $200 tops to spend on the project, but if I pay for something, it has to work. However, I am having trouble finding a way to migrate the database. Any suggestions?
I have found "FmPro Migrator" (http://www.fmpromigrator.com); would the trial version be enough for us? In the worst case, we will start from scratch and throw away the whole database the company has.
I can also download FileMaker Pro 12 and use the trial version for free for a month. Would I be able to convert the database using FMP 12?
I am totally lost... open to any free suggestions...
Also, this is a non-profit company I'm doing the project for.
If I had to do it, I'd look at the design of the FileMaker DB and create something similar in MySQL. Then I would export the FileMaker data to text and import it somehow. The details depend on foreign key values and such.
The PHP interface would be done separately.
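To illustrate the "export to text and import it somehow" step, here is a minimal Python sketch that turns FileMaker CSV exports into a MySQL insert script. The file and table names are made up for illustration; the real ones would come from your FileMaker design.

```python
# Minimal sketch: turn FileMaker CSV exports into a MySQL insert script.
# Assumes each table was exported to its own CSV with a header row.
import csv

def csv_to_inserts(csv_path, table, out):
    with open(csv_path, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)
        columns = reader.fieldnames
        for row in reader:
            values = ", ".join(
                "NULL" if row[c] in (None, "") else "'" + row[c].replace("'", "''") + "'"
                for c in columns
            )
            out.write(f"INSERT INTO {table} ({', '.join(columns)}) VALUES ({values});\n")

with open("import.sql", "w", encoding="utf-8") as out:
    csv_to_inserts("customers.csv", "customers", out)   # hypothetical export file / table
    csv_to_inserts("invoices.csv", "invoices", out)
```

The generated script can then be loaded with mysql yourdb < import.sql. As noted, foreign keys matter: load the parent tables before the tables that reference them.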
MySQL Data Conversion:
Yes, if your database is small enough, the demo version of FmPro Migrator will convert the database and also build you a PHP web application - at no cost.
Here are the limitations of the demo version:
5 fields
5 scripts
5 layouts
PHP Web Application:
Most people don't realize it, but there is a wealth of FileMaker metadata available in XML format for performing these types of conversions. This XML info is available either by copying the layout via the clipboard or by reading it from the Database Design Report XML file. I have found the clipboard data to be the most reliable source of this info.
FmPro Migrator is able to parse in the XML and convert it into the PHP web application.
Each object on a layout is represented in XML, along with style and position info. This info can be used to create form files representing the same look as the original layout. In fact, it can be difficult to see the difference between the web application and the original database if you get all of the object properties implemented. This can be helpful for situations in which companies don't want to have to retrain their employees. They want the web application to look and work the same as the original desktop application.
I have done a few of these conversions recently into the CakePHP framework. Here are a few techniques I used:
Auto-Enter Calculation Fields - Stored calculation fields are calculated within the model when it saves a record to the database.
Unstored Calculation Fields - Unstored Calculation fields are calculated in real-time within the form controller - but only for fields actually displayed on the form. This prevents unnecessarily calculating these values if they aren't being used on a form, improving performance.
Global Fields - A Global field in FileMaker is used like a global variable in programming languages. It is important to know that each FileMaker user gets their own private copy of global field data. There is no equivalent feature in MySQL or other SQL database servers, but this functionality can easily be simulated using SESSION variables. Each web user therefore still gets their own private SESSION data, simulating the functionality originally present in the FileMaker database. I structure these globals in the model data array as if they were retrieved from the model, meaning that converted scripts and fields on forms can reference them easily. Just before the record gets written into the database, the results are saved into SESSION variables for persistence (see the sketch after this list).
Global Variables in Scripts - Global variables within FileMaker scripts match up very well with the use of PHP SESSION variables, if you want to implement the same functionality.
Vector Graphic Objects - FileMaker layouts frequently include rectangle, oval and line objects. These objects can be replaced with the Raphaël JS library, providing high-quality, resolution-independent graphics.
Value Lists - Custom and field-based value lists are implemented in a centralized location within the AppController.php file. Changing the definition of a value list within the AppController therefore changes the menu automatically throughout the whole application.
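To make the Global Fields point above more concrete, here is a small, framework-agnostic sketch of the session-backed pattern, written in Python; the session dict stands in for PHP's $_SESSION, and the field names are invented for illustration.

```python
# Sketch of the "global field as session data" idea: globals are merged into
# the model data so forms can use them, then peeled back out and stored in
# the user's private session just before the record is written to the database.

def load_record(record_from_db, session):
    """Merge stored record data with the user's private 'global' fields."""
    record = dict(record_from_db)
    record.update(session.get("globals", {}))   # each user sees only their own globals
    return record

def save_record(record, session, global_field_names):
    """Persist the global fields in the session instead of the table."""
    session["globals"] = {k: record.pop(k) for k in global_field_names if k in record}
    return record   # what remains is what actually goes to MySQL

# Usage with a fake per-user session:
session = {}
record = load_record({"id": 1, "name": "Acme"}, session)
record["g_current_branch"] = "East"              # hypothetical FileMaker global field
save_record(record, session, ["g_current_branch"])
print(session["globals"])                        # {'g_current_branch': 'East'}
print(record)                                    # {'id': 1, 'name': 'Acme'}
```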
Is it possible to perform any sort of indirection in SSIS?
I have a series of jobs that perform FTP and loop through the files before trying to run another DTSX package on them. Currently this incurs a lot of repeated cruft for pulling down the files and for logging.
Is there any way of redesigning this so I only need one package rather than 6?
Based on your comment:
Effectively the 6 packages are really 2 x 3. 1st for each "group" is FTP pull
down and XML parsing to place into flat tables. Then 2nd then transforms and
loads that data.
Instead of downloading files using one package and inserting data into tables using another package, you can do that in a single package.
Here is a link containing an example which downloads files from FTP and saves them to local disk.
Here is a link containing an example that loops through CSV files in a given folder and inserts the data into a database.
Since you are using XML files, here is a link that shows how to loop through XML files.
You can effectively combine the above examples into a single package by placing the control flow tasks one after the other.
Let me know if this is not what you are looking for.
I have an ETL-type requirement for SQL Server 2005. I am new to SSIS, but I believe it will be the right tool for the job.
The project is related to a loyalty card reward system. Each month, partners in the scheme send one or more XML files detailing the qualifying transactions from the previous month. Each XML file can contain up to 10,000 records. The format of the XML is very simple: 4 "header" elements, then a repeating sequence containing the record elements. The key record elements are card_number, partner_id and points_awarded.
The process is currently running in production, but it was developed as a C# app which runs an insert for each record individually. It is very slow, taking over 8 hours to process a 10,000-record file. By using SSIS I am hoping to improve performance and maintainability.
What I need to do:
1. Collect the file
2. Validate against XSD
3. Business Rule Validation on the records. For each record I need to ensure that a valid partner_id and card_number have been supplied. To do this I need to execute a lookup against the partner and card tables. Any "bad" records should be stripped out and written to a response XML file. This is the same format as the request XML, with the addition of an error_code element. The "good" records need to be imported into a single table.
I have points 1 and 2 working OK. I have also created an XSLT to transform the XML into a flat format ready for insert. For point 3 I had started down the road of using a ForEach Loop Container on the control flow surface to loop over each XML node, together with an SQL lookup task. However, this would require a call to the database for each lookup and a call to the file system to write out the XML files for the "bad" and "good" records.
I believe that better performance could be achieved by using the Lookup control on the data flow surface. Unfortunately, I have no experience of working with the data flow surface.
Does anyone have a suggestion as to the best way to solve the problem? I searched the web for examples of SSIS packages that do something similar to what I need but found none - are there any out there?
Thanks
Rob.
SSIS is frequently used to load data warehouses, so your requirement is nothing new. Take a look at this question/answer, to get you started with tutorials etc.
A For Each loop in the control flow is used to loop through files in a directory, tables in a DB, etc. The data flow is where records fly through transformations from a source (your XML file) to a destination (tables).
You do need a lookup in one of its many flavors. Google "ssis loading data warehouse dimensions"; this will eventually show you several techniques for using the Lookup transformation efficiently.
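To make the data-flow idea concrete, here is what the Lookup transformation plus a split into "good" and "bad" outputs does conceptually, written as plain Python rather than SSIS; the partner and card sets are made-up stand-ins for your lookup tables.

```python
# Conceptual sketch (plain Python, not SSIS) of the Lookup + split pattern:
# validate each record against reference sets, route failures to a "bad"
# output with an error_code, and pass the rest through as "good".
valid_partners = {"P001", "P002"}            # stand-in for the partner table
valid_cards = {"4000123", "4000456"}         # stand-in for the card table

records = [
    {"card_number": "4000123", "partner_id": "P001", "points_awarded": 50},
    {"card_number": "9999999", "partner_id": "P001", "points_awarded": 10},
    {"card_number": "4000456", "partner_id": "BAD", "points_awarded": 20},
]

good, bad = [], []
for rec in records:
    if rec["partner_id"] not in valid_partners:
        bad.append({**rec, "error_code": "INVALID_PARTNER"})
    elif rec["card_number"] not in valid_cards:
        bad.append({**rec, "error_code": "INVALID_CARD"})
    else:
        good.append(rec)

# "good" would flow to the destination table, "bad" to the response XML.
print(len(good), "good,", len(bad), "bad")
```

In the actual package, the Lookup component's unmatched/error output (or a Conditional Split) plays the part of the if/else, a database destination takes the good rows, and an XML-writing step takes the bad ones.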
To flatten the XML (if it is simple enough), I would simply use the XML Source in the data flow; the XML Task is for heavier stuff.