Retrieving All Payments From Sage Line 50 Database - sage-line-50

I'm trying to retrieve a list of all payments received/entered into a Sage Line 50 database. The company I work for is a member of the Sage developer program, so I have access to the SDK, help files and the like, but I have been unable to find any specific information regarding payments.
Some of the .dta files contain references to payments (SPLITS.DTA & HEADER.DTA) alongside Invoice rows.
Does anyone know whether or not there is a separate file which contains only payment information, and if so what is it? Alternatively, will I have to pull the full list of rows from the SPLITS/HEADER files and filter them by type?
Many thanks in advance

I pulled data from the Header and Split files for a test customer this afternoon, and they contain (as near as I can tell) all customer activity - invoices, invoice payments and credits are all reflected in both data files (the split data file containing more in-depth data) and can be filtered by bank_code and transaction type.
To get the data, first create a reference to a customer object and from there link to all of the header records (assuming you have an existing connection and workspace).
dynamic workspace = this._workspaces[workspaceName];
dynamic customer = workspace.CreateObject("SalesRecord");
bool added = customer.AddNew();
customer.MoveFirst(); //find first customer
dynamic headerObject = customer.Link;
bool headerFound = headerObject.MoveFirst(); //use .MoveNext() to cycle headers
You can then pull data from the header object using:
string AccountRef = headerObject.Fields.Item("ACCOUNT_REF").Value;
Where ACCOUNT_REF is a field in the HeaderData object.
Use the following code to get split data:
dynamic splitObject = headerObject.Link;
bool splitFound = splitObject.MoveFirst(); // and so on
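If you do end up filtering the header rows yourself, a minimal sketch might look like the following. The field names "TYPE" and "BANK_CODE" and the transaction type codes "SR" (sales receipt) and "SA" (payment on account) are assumptions based on common Sage Line 50 conventions, so check them against the SDK's field documentation; it also assumes MoveNext() returns false when there are no more header records.
// Sketch only: loop the headers and keep payment-type transactions
bool more = headerObject.MoveFirst();
while (more)
{
    string type = Convert.ToString(headerObject.Fields.Item("TYPE").Value); // assumed field name
    if (type == "SR" || type == "SA") // assumed payment type codes
    {
        string accountRef = Convert.ToString(headerObject.Fields.Item("ACCOUNT_REF").Value);
        string bankCode = Convert.ToString(headerObject.Fields.Item("BANK_CODE").Value); // assumed field name
        // ... collect or process the payment here
    }
    more = headerObject.MoveNext();
}
From each matching header you can then walk down to its splits with the splitObject code above if you need the individual allocations.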

Related

JMeter : How to read particular row data in csv file based on a column value?

I am new to JMeter and doing a PoC to load test a web application.
What I am trying to do:
I have a total of 4 user logins (surgeons). Each login is associated with 'n' patients.
I've created 2 CSV files:
one with the user login and password for each surgeon
another that contains the PatientName, PatientId and the surgeon (loginName) associated with that patient, as below.
PatientName,PatientId,loginName
Pa1,PID1,user1
Pa2,PID2,user1
Pa3,PID3,user1
Pa4,PID4,user1
Pa5,PID5,user2
Pa6,PID6,user2
Pa7,PID7,user3
Pa8,PID8,user4
My Scenario:
Log in as a user.
Navigate to each patient's dashboard as per their associations.
Log out of the application.
My Test Plan:
Thread Group (4 users, ramp-up time of 1 sec, 1 loop)
-CSV1 (with username, password)
-Login Page and Navigate to the Main page
-Runtime Controller (to sustain the load for a set amount of time)
--While Loop (to loop through the patient dashboards of the surgeon/user logged in)
---CSV2 (the data as shown above)
----Navigate to Dashboard
----Navigate to Main
-Log out of the Application
What I want to achieve:
I want to use a single thread group and run it concurrently for all 4 users. In this process, once a user logs in, that user should only read those patient rows from the CSV which are associated with their login.
For example: when Thread 1 is running with the user1 login, it should only be able to loop through Pa1, Pa2, Pa3 and Pa4; when Thread 2 is running with the user2 login, it should only read the Pa5 and Pa6 data.
In the same way, each user login should only pick the patients associated with it, as described above.
Is there any way I can use this single CSV2 file and achieve this, so that I don't have to create n thread groups for n logins, with n CSV files each containing the data specific to one user login?
I did try to use the __CSVRead function, but that would force me to create multiple files (I currently have 500 CSV files), which is not a great idea. I am hoping to find a solution that keeps all the data in one CSV and reads it based on the column value.
Reading data from a CSV file based on a particular column value is not supported in JMeter; you can consider the following options:
Create separate CSV files for each surgeon and pick up the relevant file based on the currently logged-in surgeon's id/name/whatever using the __CSVRead() function.
Use an If Controller to choose this or that execution branch based on the surgeon name (see the sketch after this list).
Use a Switch Controller to choose this or that execution branch based on the surgeon name.
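For the If Controller route, a minimal sketch of the condition, assuming the CSV Data Set Config for CSV2 exposes its loginName column as ${loginName} and the login CSV exposes the current user as ${username} (adjust the names to whatever your configs actually use), would be:
${__jexl3("${loginName}" == "${username}")}
With "Interpret Condition as Variable Expression?" ticked, the samplers inside the controller then run only for CSV2 rows whose loginName matches the user that thread logged in as; rows belonging to other surgeons are still read, but skipped.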

SSIS consolidate and concatenate multiple rows into single rows without using SQL

I am trying to accomplish something that is pretty easy to do in SQL, but seemingly very challenging to do in SSIS without using SQL. Basically, I need to consolidate and concatenate a field of a many-to-one relationship.
Given entities: [Contract Item] (many) to (one) [Account]
There is a field [ari_productsummary] that contains the product listed on the Contract Item entity. We want to write that value to the Account as [ari_activecontractitems]. However, an Account may have more than one Contract Item record associated with it, in which case we want to concatenate those values. We also only want the distinct values to be concatenated (distinct rows are already handled within my data flow).
This can be accomplished by writing to a temporary table and then using a query or view to obtain the summarized results, as follows. I created a SQL table called TESTTABLE that contains the [ari_productsummary] from the Contract Item entity along with the referring [accountid] to map it back to the Account. I then wrote the following query as a view:
SELECT distinct accountid,
(SELECT TT2.ari_productsummary + '; '
FROM TESTTABLE TT2
WHERE TT2.accountid = TT.accountid
FOR XML PATH ('')
) AS 'ari_activecontractitems'
FROM TESTTABLE TT
Executing that query gives me the results that I want, which I can then use for importing into the Account entity.
But how do I do this in an SSIS data flow without writing to a SQL table as a temporary placeholder for the data? I want to do the entire process inside one data flow container, without using a temporary SQL table/view. The whole summarization process needs to be done on the fly.
Does anyone have a solution that doesn't require a temporary SQL table/view/query, but is contained entirely within the data flow?
I am using VS 2017 and the KingswaySoft Dynamic CRM 365 ETL toolset to develop my solution/package.
Spitballing here, as I don't have Dynamics nor do I have the custom components.
Data Flow 1 - Contract aggregation
The purpose of this data flow is to replicate your logic in the elegant query you provided and shove that into a Cache Connection Manager (see Notes for 2008+ at the end)
KingswaySoft Dynamics Source -> Script Component -> Cache Transform
If you want to keep the sort in there, do it before the Script Component. The implementation I'll take with the Script Component is that it's fully blocking - that is, all the rows must arrive before it can send any on. Components like the Merge Join are only partially blocking, because the requirement of sorted data means that once you no longer have a match for the current item, you can send it on down the pipeline.
The Script Component is going to be an asynchronous transformation. You'll have two output columns: your key, accountid, and your new derived column, ari_activecontractitems. That column might need to be big - you'll know your data best, but if it's a blob type in Dynamics (> 4k unicode or > 8k ascii characters) then you'll have to define the data type as DT_TEXT/DT_NTEXT.
As inputs, you'll select accountid and ari_productsummary from your source.
The code should be pretty easy. We're going to accumulate the inbound data into a Dictionary.
// member variable
Dictionary<string, List<string>> accumulator;
In the PreExecute method, we'll tack this in there to initialize our variable
// initialize in the PreExecute method
accumulator = new Dictionary<string, List<string>>();
In Input0_ProcessInputRow (the per-row method)
// simulate the inbound queue
// row_id would be something like Row.accountid, invoice like Row.ari_productsummary
if (!accumulator.ContainsKey(row_id))
{
    // create an empty list for this key
    accumulator.Add(row_id, new List<string>());
}
// add the value if we don't already have it
if (!accumulator[row_id].Contains(invoice))
{
    accumulator[row_id].Add(invoice);
}
Once you get the signal that no more data is available (the end of the rowset), that's when you start writing output rows. The auto-generated code will have placeholders for all of this.
// This is how we shove data out the pipe
foreach (var kvp in accumulator)
{
    // approximately thus
    OutputBuffer1.AddRow();
    OutputBuffer1.row_id = kvp.Key;
    OutputBuffer1.ari_productsummary = string.Join("; ", kvp.Value);
}
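Pulling the fragments together, a minimal sketch of the Script Component body might look like this. It assumes the input columns are accountid and ari_productsummary, the output columns are accountid and ari_activecontractitems, and that accountid arrives as a string; the generated buffer property names (Row.accountid, Output0Buffer.ari_activecontractitems, and Output0Buffer itself versus OutputBuffer1 above) follow whatever you actually name the columns and the output in the Script Transformation Editor.
// Sketch of the methods inside the auto-generated ScriptMain class.
// Needs "using System.Collections.Generic;" at the top of the script.
private Dictionary<string, List<string>> accumulator;

public override void PreExecute()
{
    base.PreExecute();
    accumulator = new Dictionary<string, List<string>>();
}

public override void Input0_ProcessInputRow(Input0Buffer Row)
{
    // Accumulate distinct ari_productsummary values per accountid; nothing goes downstream yet.
    if (!accumulator.ContainsKey(Row.accountid))
    {
        accumulator.Add(Row.accountid, new List<string>());
    }
    if (!accumulator[Row.accountid].Contains(Row.ari_productsummary))
    {
        accumulator[Row.accountid].Add(Row.ari_productsummary);
    }
}

public override void Input0_ProcessInput(Input0Buffer Buffer)
{
    while (Buffer.NextRow())
    {
        Input0_ProcessInputRow(Buffer);
    }
    // Only once the last input buffer has arrived do we write the output rows.
    if (Buffer.EndOfRowset())
    {
        foreach (var kvp in accumulator)
        {
            Output0Buffer.AddRow();
            Output0Buffer.accountid = kvp.Key;
            Output0Buffer.ari_activecontractitems = string.Join("; ", kvp.Value);
        }
        Output0Buffer.SetEndOfRowset();
    }
}
The EndOfRowset() branch is what makes the component fully blocking: no output row is written until every input row has been accumulated.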
We have an upcoming release that comes with a component that does exactly what you are trying to achieve without the need to write custom code. The feature is currently in preview; please reach out to us for private access to it. You can find our contact information on our website.
UPDATE - June 5, 2020: we have made the components available for public access at https://www.kingswaysoft.com/products/ssis-productivity-pack/ as part of our 2020 Release Wave 1. We have two components available that serve this kind of purpose. The Composition component takes input values and transforms them into a composite value in an SSIS column. The Decomposition component does the opposite: it takes an input value and splits it into multiple rows using either delimiter-based text splitting or XML/JSON array splitting.

Flask SQL-Alchemy query is returning null for data that exists in my database. What could be the cause

My Python program is meant to query my MySQL database for a record. The record exists in the database and contains data, but the program returns null values. The table that gets queried is titled Market. In that table there is a column titled market_cap and a column titled volume. When I use MySQLWorkbench to query my database, the result shows that there is data in the columns. However, the program receives null.
Attached are two images (links, because I need to earn 10 reputation points to embed images in a post):
MySQL database column image: shows a view of the column in my database that is having issues. From the image, you can see that the data I need exists in my database.
Code with results from PyCharm debugger: before running the debugger, I set a breakpoint right after the line where the code queries the database for an object. Image two shows the output I received when the code queried the database.
Screenshot of the Market model.
Screenshot of the solution: I found that converting the market cap (market_cap) before adding it to the dictionary (price_map) returns the correct value. You can see it on line 138.
What could cause existing data in a record to be returned as null?
import logging
from flask_restful import Resource
from api.resources.util.date_util import (pretty_date_to_epoch,
                                           epoch_to_pretty_date)
from common.decorators import log_exception
from db.models import db, Market

log = logging.getLogger(__name__)

def map_date_to_price():
    buy_date_list = ["2015-01-01", "2015-02-01", "2015-03-01"]
    sell_date_list = ["2014-12-19", "2014-01-10", "2015-01-20",
                      "2016-01-10"]
    date_list = buy_date_list + sell_date_list
    market_list = []
    price_map = {}
    for day in date_list:
        market_list.append(db.session.query(Market).
                           filter(Market.pretty_date == day).first())
    for market in market_list:
        price_map[market.pretty_date] = market.market_cap
    return price_map
The two fields that are (apparently) being retrieved as null are both db.Numeric. http://docs.sqlalchemy.org/en/latest/core/type_basics.html notes that these are, by default, backed by a decimal.Decimal object, which I'll bet can't be converted to JSON, so what comes back from Market.__repr__() will show them as null.
I would try adding asdecimal=False to the two db.Numeric() calls in Market, so that the values come back as plain floats.
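A minimal sketch of that change in the Market model (column names taken from your question; keep whatever precision/scale arguments the model already passes to db.Numeric):
market_cap = db.Column(db.Numeric(asdecimal=False))
volume = db.Column(db.Numeric(asdecimal=False))
With asdecimal=False, SQLAlchemy hands the values back as floats, which serialize to JSON without any extra conversion.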

Mass Upload Files To Specific Contacts Salesforce

I need to upload some 2000 documents to specific users in Salesforce. I have a CSV file that has the Salesforce-assigned ContactID, as well as a direct path to the files on my desktop. Each contact's specific file URL has been included in the CSV. How can I upload them all at once and, especially, to the correct contact?
You indicated in the comments / chat that you want it as "Files".
The "Files" object is bit more complex than Attachments, you'll need to do it in 2-3 steps. What you see as a File (you might see it referred to in documentation as Chatter Files or Salesforce Content) is actually several tables. There's
ContentDocument which can be kind of a file header (title, description, language, tags, linkage to many other areas in SF - because it can be standalone, it can be uploaded to certain SF Content Library, it can be linked to Accounts, Contacts, $_GOD knows what else)
ContentVersion which is, well, the actual payload. Only the most recent version is displayed out of the box, but if you really want you can go back in time
and more
The crap part is that you can't insert ContentDocument directly (there's no create() call in the list of operations).
Theory
So you'll need:
Insert ContentVersion records (v1 will automatically create the parent ContentDocuments for you... it does sound a bit ass-backwards but it works). After this is done you'll have a bunch of standalone documents loaded but not linked to any Contacts
Learn the Ids of their parent ContentDocuments
Insert ContentDocumentLink records that will connect Contacts and their PDFs
Practice
This is my C:\stacktest folder. It contains some SF cheat sheet PDFs.
Here's my file for part 1 of the load
Title PathOnClient VersionData
"Lightning Components CheatSheet" "C:\stacktest\SF_LightningComponents_cheatsheet_web.pdf" "C:\stacktest\SF_LightningComponents_cheatsheet_web.pdf"
"Process Automation CheatSheet" "C:\stacktest\SF_Process_Automation_cheatsheet_web.pdf" "C:\stacktest\SF_Process_Automation_cheatsheet_web.pdf"
"Admin CheatSheet" "C:\stacktest\SF_S1-Admin_cheatsheet_web.pdf" "C:\stacktest\SF_S1-Admin_cheatsheet_web.pdf"
"S1 CheatSheet" "C:\stacktest\SF_S1-Developer_cheatsheet_web.pdf" "C:\stacktest\SF_S1-Developer_cheatsheet_web.pdf"
Fire up Data Loader, select Insert, and tick "Show all Salesforce objects". Find ContentVersion. The load should be straightforward (if you're hitting memory issues, set the batch size to something low, even 1 record at a time if really needed).
You'll get back a "success file", but it's useless here. We don't need the Ids of the generated ContentVersions, we need their parents... Fire "Export" in Data Loader, show all objects again, pick ContentDocument. Use a query similar to this:
Select Id, Title, FileType, FileExtension
FROM ContentDocument
WHERE CreatedDate = TODAY AND CreatedBy.FirstName = 'Ethan'
You should see something like this:
"ID","TITLE","FILETYPE","FILEEXTENSION"
"0690g0000048G2MAAU","Lightning Components CheatSheet","PDF","pdf"
"0690g0000048G2NAAU","Process Automation CheatSheet","PDF","pdf"
"0690g0000048G2OAAU","Admin CheatSheet","PDF","pdf"
"0690g0000048G2PAAU","S1 CheatSheet","PDF","pdf"
Use Excel and the magic of VLOOKUP or something similar to link them back to Contacts by title. You wrote that you already have a file with Contact Ids and titles, so there's hope... Create a file like this:
ContentDocumentId LinkedEntityId ShareType Visibility
0690g0000048G2MAAU 0037000000TWREI V InternalUsers
0690g0000048G2NAAU 0030g000027rQ3z V InternalUsers
0690g0000048G2OAAU 0030g000027rQ3a V InternalUsers
0690g0000048G2PAAU 0030g000027rPz4 V InternalUsers
The 1st column is the file Id, then the contact Id, then some black magic you can read about & change if needed in the ContentDocumentLink docs.
Load it as an insert to ContentDocumentLink (again, show all objects).
Woohoo! Beer time.
Your CSV should contain the following fields:
- ParentId = the Id of the object you want to link the attachment to (the Id of the contact)
- Name = the name of the file
- ContentType = extension (.xls or .pdf or ...)
- OwnerId = if empty, I believe it takes your user as the owner
- Body = the location of the file on your machine (for instance: C:\SFDC\Files\test.pdf)
Use this CSV to insert the records (via Data Loader) into the Attachment object.
You will then see, for each contact, that records have been added to the 'Notes & Attachments' related list.
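For illustration, a minimal version of that CSV could look like this (the contact Id is a hypothetical one reused from the example above, and the path is the sample path from the field list; ContentType and OwnerId are omitted here and can be added as described above):
ParentId,Name,Body
0037000000TWREI,test.pdf,C:\SFDC\Files\test.pdf
Data Loader reads the file at the path given in the Body column and uploads its content as the attachment body.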

SSIS - Process a flat file with varying data

I have to process a flat file whose syntax is as follows, one record per line.
<header>|<datagroup_1>|...|<datagroup_n>|[CR][LF]
The header has a fixed-length field format that never changes (ID, timestamp etc.). However, there are different types of data groups and, even though they are fixed-length, the number of their fields varies depending on the data group type. The first three numbers of a data group define its type. The number of data groups in each record also varies.
My idea is to have a staging table into which I would insert all the data groups. So two records like this,
12320160101|12323456KKSD3467|456SSGFED43520160101173802|
98720160102|456GGLWSD45960160108854802|
would produce three records in the staging table:
ID Timestamp Data
123 01/01/2016 12323456KKSD3467
123 01/01/2016 456SSGFED43520160101173802
987 02/01/2016 456GGLWSD45960160108854802
This would allow me to preprocess the staged records for further processing (some would be discarded, some would have their data broken down further). My question is how to break the flat file down into the staging table. I can split the entire record on the pipe (|) and then use a Derived Column Transformation to break down the header with SUBSTRING. After that it gets trickier because of the varying number of data groups.
The solution I came up with myself doesn't try to split at the flat file source, but rather in a script. My Data Flow looks like this.
So the Flat File Source output is just a single column containing the entire line. The Script Component contains output columns for each column in the Staging table. The script looks like this.
// Requires "using System.Globalization;" at the top of the script for CultureInfo.
public override void Input0_ProcessInputRow(Input0Buffer Row)
{
    // Each input row is one full record; split it into the header plus its data groups.
    var splits = Row.Line.Split('|');
    for (int i = 1; i < splits.Length; i++)
    {
        // One output row per data group, repeating the header fields.
        Output0Buffer.AddRow();
        Output0Buffer.ID = splits[0].Substring(0, 11);
        Output0Buffer.Time = DateTime.ParseExact(splits[0].Substring(14, 14), "yyyyMMddHHmmssFFF", CultureInfo.InvariantCulture);
        Output0Buffer.Datagroup = splits[i];
    }
}
Note that the SynchronousInputID property (Script Transformation Editor > Inputs and Outputs > Output0) must be set to None. Otherwise you won't have Output0Buffer available in your script. Finally, the OLE DB Destination just maps the script output columns to the staging table columns. This solves the problem I had with creating multiple output records from a single input record.