Define custom POST method for MyDAC - MySQL

I have three tables: objects (primary key object_ID), flags (primary key flag_ID) and object_flags (a cross-table between objects and flags with some extra info).
I have a query returning all flags, and a one or a zero indicating whether a given object has a certain flag:
SELECT
  f.*,
  of.*,
  of.object_ID IS NOT NULL AS object_has_flag
FROM
  flags f
  LEFT JOIN object_flags of
    ON (f.flag_ID = of.flag_ID) AND (of.object_ID = :objectID);
In the application (which is written in Delphi), all rows are loaded in a component. The user can assign flags by clicking check boxes in a table, modifying the data.
Suppose one line is edited. Depending on the old and new value of object_has_flag, one of three things has to be done (sketched in SQL below):
If object_has_flag was true and still is true, an UPDATE should be done on the relevant row in object_flags.
If object_has_flag was false but is now true, an INSERT should be done.
If object_has_flag was true but is now false, the row should be deleted.
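The three statements would look roughly like this; the column names come from the query above, and some_info is a stand-in for whatever extra columns object_flags actually carries:

-- was true, still true: update the extra info on the link row
UPDATE object_flags SET some_info = :someInfo
WHERE object_ID = :objectID AND flag_ID = :flagID;

-- was false, now true: create the link row
INSERT INTO object_flags (object_ID, flag_ID, some_info)
VALUES (:objectID, :flagID, :someInfo);

-- was true, now false: remove the link row
DELETE FROM object_flags
WHERE object_ID = :objectID AND flag_ID = :flagID;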
It seems that this cannot be done in one query (see https://stackoverflow.com/questions/7927114/conditional-replace-or-delete-in-one-query).
I'm using MyDAC's TMyQuery as a dataset. I have written separate code that executes the necessary queries to save changes to a row, but how do I couple this to the dataset? What event handler should I use, and how do I tell the TMyQuery that it should refresh instead of post?
EDIT: Apparently, it is not completely clear what the problem is. The standard UpdateSQL, DeleteSQL and InsertSQL cannot be used, because sometimes, after editing a line (not deleting it or inserting one), an INSERT or DELETE has to be done.

The short answer is, to paraphrase your answer here:
Look up the documentation for "Updating Data with MyDAC Dataset Components" (as of MyDAC 5.80).
Every TCustomDADataSet descendant (such as TMyQuery) has the capability to set update SQL statements using the SQLInsert, SQLUpdate and SQLDelete properties.
TMyUpdateSQL is also a promising component for custom update operations.

It seems that the easiest way is to use the BeforePost event, and determine what has to be done using the OldValue and NewValue properties of several fields.
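A minimal sketch of that BeforePost approach, assuming the query from the question; ExecFlagUpdate, ExecFlagInsert and ExecFlagDelete are hypothetical helpers wrapping the three statements shown earlier:

procedure TForm1.MyQueryBeforePost(DataSet: TDataSet);
var
  WasSet, IsSet: Boolean;
begin
  // OldValue holds the value as fetched; the field itself holds the edit
  WasSet := DataSet.FieldByName('object_has_flag').OldValue = 1;
  IsSet  := DataSet.FieldByName('object_has_flag').AsInteger = 1;

  if WasSet and IsSet then
    ExecFlagUpdate(DataSet)   // UPDATE object_flags ... (hypothetical helper)
  else if IsSet then
    ExecFlagInsert(DataSet)   // INSERT INTO object_flags ...
  else if WasSet then
    ExecFlagDelete(DataSet);  // DELETE FROM object_flags ...

  // Raise EAbort so the dataset's own INSERT/UPDATE never reaches the server;
  // afterwards, Cancel the pending edit and re-fetch (MyDAC datasets also
  // expose RefreshRecord, which may be enough to re-sync the row).
  SysUtils.Abort;
end;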

Related

SSIS consolidate and concatenate multiple rows into single rows without using SQL

I am trying to accomplish something that is pretty easy to do in SQL, but seemingly very challenging to do in SSIS without using SQL. Basically, I need to consolidate and concatenate a field of a many-to-one relationship.
Given entities: [Contract Item] (many) to (one) [Account]
There is a field [ari_productsummary] that contains the product listed on the Contract Item entity. We want to write that value to the Account as [ari_activecontractitems]. However, an Account may have more than one Contract Item record associated to it, in which case, we want to concatenate those values. We also only want the distinct values to be concatenated (distinct rows already solved within my data flow).
This can be accomplished by writing to a temporary table and then using a query or view to obtain the summarized results, as follows. I created a SQL table called TESTTABLE that contains the [ari_productsummary] from the Contract Item entity, along with the referring [accountid] to map it back to Account. I then wrote the following query as a view:
SELECT distinct accountid,
(SELECT TT2.ari_productsummary + '; '
FROM TESTTABLE TT2
WHERE TT2.accountid = TT.accountid
FOR XML PATH ('')
) AS 'ari_activecontractitems'
FROM TESTTABLE TT
Executing that query provides the results that I want, which I can then use for importing into the Account entity.
But how do I do this in an SSIS data flow without writing to a SQL table as a temporary placeholder for the data? I want to do the entire process inside one data flow container, without using a temporary SQL table/view; the whole summarization process needs to be done on the fly.
Does anyone have a solution that doesn't require a temporary SQL table/view/query, but is contained entirely within the data flow?
I am using VS 2017 and the KingswaySoft Dynamics CRM 365 ETL toolset to develop my solution/package.
Spitballing here, as I don't know Dynamics, nor do I have the custom components.
Data Flow 1 - Contract aggregation
The purpose of this data flow is to replicate your logic in the elegant query you provided and shove the result into a Cache Connection Manager.
KingswaySoft Dynamics Source -> Script Component -> Cache Transform
If you want to keep the sort in there, do it before the Script Component. The implementation I'll take with the Script Component is that it's fully blocking: all the rows must arrive before it can send any on. Components like the Merge Join are only partially blocking, because the requirement of sorted data means that once you no longer have a match for the current item, you can send it on down the pipeline.
The Script Component is going to be an asynchronous transformation. You'll have two output columns: your key, accountid, and your new derived column, ari_activecontractitems. That column might need to be big; you'll know your data best, but if it's a blob type in Dynamics (> 4k Unicode or > 8k ASCII characters) then you'll have to define the data type as DT_NTEXT/DT_TEXT.
As inputs, you'll select accountid and ari_productsummary from your source.
The code should be pretty easy. We're going to accumulate the inbound data into a Dictionary.
// member variable: one list of product summaries per accountid
Dictionary<string, List<string>> accumulator;
In the PreExecute method, we'll initialize our variable:
// initialize in PreExecute
accumulator = new Dictionary<string, List<string>>();
In Input0_ProcessInputRow (the generated name varies with your input's name):
// pull the key and value off the inbound row; the generated property
// names come from your input columns and may differ slightly
string accountid = Row.accountid;
string productSummary = Row.ariproductsummary;

if (!accumulator.ContainsKey(accountid))
{
    // create an empty list for this key
    accumulator.Add(accountid, new List<string>());
}

// only add the value if we don't already have it (keeps entries distinct)
if (!accumulator[accountid].Contains(productSummary))
{
    accumulator[accountid].Add(productSummary);
}
Once you get the signal that no more data is available, that's when you start sending output data. The auto-generated code will have placeholders for all of this.
// This is how we shove data out the pipe
foreach (var kvp in accumulator)
{
    // approximately thus; the property names follow your output columns
    OutputBuffer1.AddRow();
    OutputBuffer1.accountid = kvp.Key;
    OutputBuffer1.ari_activecontractitems = string.Join("; ", kvp.Value);
}
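For orientation, a sketch of where those pieces land in the generated script component class; the method and buffer names below are the designer defaults and will vary with how your input and output are named, and EndOfRowset() is the "no more data" signal mentioned above:

public override void Input0_ProcessInput(Input0Buffer Buffer)
{
    // drain the current buffer row by row
    while (Buffer.NextRow())
    {
        Input0_ProcessInputRow(Buffer);
    }

    // all rows have arrived; emit the accumulated results
    if (Buffer.EndOfRowset())
    {
        foreach (var kvp in accumulator)
        {
            OutputBuffer1.AddRow();
            OutputBuffer1.accountid = kvp.Key;
            OutputBuffer1.ari_activecontractitems = string.Join("; ", kvp.Value);
        }
        OutputBuffer1.SetEndOfRowset();
    }
}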
We have an upcoming release that comes with a component that does exactly what you are trying to achieve, without the need to write custom code. The feature is currently in preview; please reach out to us for private access. You can find our contact information on our website.
UPDATE (June 5, 2020): we have made the components available for public access at https://www.kingswaysoft.com/products/ssis-productivity-pack/ as part of our 2020 Release Wave 1. Two components serve this kind of purpose: the Composition component takes input values and transforms them into a composite value in an SSIS column, and the Decomposition component does the opposite, taking an input value and splitting it into multiple rows using either delimiter-based text splitting or XML/JSON array splitting.

How to update multiple records in same table using .AfterUpdate data macro without error "A data macro resource limit was hit."

I have a table tblItems with a list of inventory items. The table has many columns to describe these items, including columns for SupplierName, SupplierOrderNumber and PredictedArrivalDate.
If I order several new items from a supplier, I will record each item separately in the table with the same supplier name, order number and a predicted arrival date.
I would like to add a data macro, so that if I update the PredictedArrivalDate for one record, the value will be copied to the PredictedArrivalDate column of other records/items with the same SupplierName AND SupplierOrderNumber.
The closest I've got is:
SetLocalVar (MySupplierName, [SupplierName])
SetLocalVar (MySupplierOrderNumber, [SupplierOrderNumber])
SetLocalVar (MyPredictedArrivalDate, [PredictedArrivalDate])
For Each Record in tblItems
Where Condition = [SupplierOrderNumber] Like [MySupplierOrderNumber] And [SupplierName] Like [MySupplierName] And [PredictedArrivalDate]<>[MyPredictedArrivalDate]
Alias OtherRecords
EditRecord
SetField ([OtherRecords].[PredictedArrivalDate], [MyPredictedArrivalDate])
End EditRecord
However, when I run this, only 5 records update, and the error log reports error -20341:
"A data macro resource limit was hit. This may be caused by a data
macro recursively calling itself. The Updated() function may be
used to detect which field in a record has been updated to help
prevent recursive calls."
How can I get this working?
I'm not one for using macros to do anything, so I'd use VBA and recordsets/an action query to do the updating.
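For example, a minimal VBA sketch of that idea in the form's AfterUpdate event; the table, field and control names are taken from the question, and the form is assumed to be bound to tblItems:

Private Sub PredictedArrivalDate_AfterUpdate()
    ' Push the new date to all other items on the same supplier order
    CurrentDb.Execute _
        "UPDATE tblItems SET PredictedArrivalDate = " & _
        "#" & Format(Me.PredictedArrivalDate, "yyyy-mm-dd") & "# " & _
        "WHERE SupplierName = '" & Replace(Me.SupplierName, "'", "''") & "' " & _
        "AND SupplierOrderNumber = '" & Replace(Me.SupplierOrderNumber, "'", "''") & "' " & _
        "AND ID <> " & Me.ID, dbFailOnError
End Sub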
You can call a user-defined function inside a data macro by setting a local var equal to its result.
Access doesn't like data macros triggering themselves (which you are doing: you're using an After Update macro and updating fields in the same table on different records), because there is a risk of accidentally creating endless loops. It looks like you triggered a measure that's made to prevent this. I'd try to avoid that as much as possible.
Note: using user-defined functions inside data macros can cause problems when you're linking to the table from outside of Access (via ODBC for example).
This isn't a good solution (it's not a data macro), but it does work as a temporary fix.
I created an update query called "updatePredictedArrivalDate":
PARAMETERS
ItemID Long,
MyPredictedArrivalDate DateTime,
MySupplierName Text ( 255 ),
MySupplierOrderNumber Text ( 255 );
UPDATE tblItems
SET tblItems.PredictedArrivalDate = [MyPredictedArrivalDate]
WHERE (((tblItems.SupplierName) = [MySupplierName])
AND ((tblItems.SupplierOrderNumber) = [MySupplierOrderNumber])
AND ((tblItems.ID) <> [ItemID]));
On the PredictedArrivalDate form field .AfterUpdate event, I then added this macro:
IF [PredictedArrivalDate].[OldValue]<>[PredictedArrivalDate] Or [PredictedArrivalDate]<>""
OpenQuery (updatePredictedArrivalDate, Datasheet, Edit, [ID], [PredictedArrivalDate], [SupplierName], [SupplierOrderNumber])
I now have to remember to add this .AfterUpdate event to any other forms I create that amend that particular field.
If anyone has a better solution, please let me know.

Laravel Eloquent is not saving properties to database (possibly MySQL)

I'm having a strange issue.
I created a model observer for my User model. The model observer runs at saving. When I dump the object at the very end of the observer (this is just before it saves, according to the Laravel docs), it displays all the attributes set correctly for the object; I've even seen an error that showed the correct attributes as set and being inserted into my database table. However, after the save has completed and I query the database, two of the fields are not saved in the table.
There is no code of mine sitting between the point where I dumped the attributes to check that they had been set and the save operation to the database, so I have no idea what could be causing this to happen. All the names are set correctly and, like I said, the attributes show as being inserted into the database; they just never end up being saved. I receive no error messages, and only two out of ten attributes aren't being saved.
In my searches I have found many posts explaining that the $fillable property should be set, or describing problems with variables being misnamed or unset. However, because the specific attributes that aren't being saved are already listed in the $fillable array, on top of the fact that they print out exactly as expected pre-save, I don't believe those issues are related to the problem I am experiencing.
To save, I'm calling:
User::create(Input::all());
and then the observer that handles the data looks like this:
class UserObserver {

    public function saving($user)
    {
        # a common key between the city and state tables; helps to identify the correct city
        $statefp = State::where('id', $user->state_id)->pluck('statefp');

        # trailingZeros is a function that pads its first parameter with zeros
        # to the width given in the second parameter, so the date parts here
        # are always two characters wide
        $user->birth_date = $user->year.'-'.$user->trailingZeros($user->month, 2).'-'.$user->trailingZeros($user->day, 2);

        if (empty($user->city)) {
            $user->city_id = $user->defaultCity;
        }
        $user->city_id = City::where('statefp', $statefp)->where('name', ucfirst($user->city_id))->pluck('id');

        # if the zip code input by the user differs from the suggested zip code,
        # look up location data for the input zip code in the geocodes table
        if ($user->zip !== $user->defaultZip) {
            $latlon = Geocode::where('zip', $user->zip)->first();
            $user->latitude = $latlon['latitude'];
            $user->longitude = $latlon['longitude'];
        }

        unset($user->day);
        unset($user->month);
        unset($user->year);
        unset($user->defaultZip);
        unset($user->defaultCity);
    }
}
That is the code for the two values that aren't being set. When I run
dd($user);
all the variables are set correctly and show up in the MySQL insert attempt with the correct values, but they do not persist past that point. It seems to me that MySQL is possibly rejecting the values for city_id and birth_date; however, I cannot understand why, or whether it is a problem with Laravel or MySQL.
Since I was calling
User::create();
I figured I'd try to have my observer listen to:
creating();
I'm not sure why it only affected the date and city variables, but changing the observer to listen on creating() instead of saving() seems to have solved my problem.
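For reference, the working shape is just the same observer with the method renamed; a sketch, with the body elided:

class UserObserver {

    // creating() fires before the INSERT that User::create() builds,
    // so attribute changes made here are included in the insert
    public function creating($user)
    {
        // ... derive birth_date, city_id, latitude/longitude as above ...
    }
}

User::observe(new UserObserver);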

IndexedDB - boolean index

Is it possible to create an index on a Boolean type field?
Let's say the schema of the records I want to store is:
{
  id: 1,
  name: "Kris",
  _dirty: true
}
I created a normal, non-unique index (in onupgradeneeded):
...
store.createIndex("dirty","_dirty",{ unique: false })
...
The index is created, but it is empty! In the browser's IndexedDB viewer there are no records with Boolean values, only Strings, Numbers, Dates, or even Arrays.
I am using Chrome 25 Canary.
I would like to find all records that have the _dirty attribute set to true. Do I have to change _dirty to a string or an int, then?
Yes: a boolean is not a valid key.
If you must, you can of course resolve it to 1 and 0.
But it is for a good reason: indexing a boolean value is not informative. In your case above, you can do a table scan and filter on the fly, rather than an index query.
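Such a scan is only a few lines; a sketch, assuming the object store is called 'items':

db.transaction('items').objectStore('items').openCursor().onsuccess = function (e) {
  var cursor = e.target.result;
  if (!cursor) return;               // done
  if (cursor.value._dirty === true) {
    // handle the dirty record here
  }
  cursor.continue();
};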
The answer marked as checked is not entirely correct.
You cannot create an index on a property that contains values of the Boolean JavaScript type. That part of the other answer is correct. If you have an object like var obj = {isActive: true};, trying to create an index on obj.isActive will not work and the browser will report an error message.
However, you can easily simulate the desired result. indexedDB does not insert properties that are not present in an object into an index. Therefore, you can define a property to represent true, and not define the property to represent false. When the property exists, the object will appear in the index. When the property does not exist, the object will not appear in the index.
Example
For example, suppose you have an object store of 'obj' objects. Suppose you want to create a boolean-like index on the isActive property of these objects.
Start by creating an index on the isActive property. In the onupgradeneeded callback function, use store.createIndex('isActive','isActive');
To represent 'true' for an object, simply use obj.isActive = 1;. Then add or put the object into the object store. When you want to query for all objects where isActive is set, use db.transaction('store').objectStore('store').index('isActive').openCursor();.
To represent false, simply use delete obj.isActive; and then add or put the object into the object store.
When you query for all objects where isActive is set, objects missing the isActive property (because it was deleted or never set) will not appear when iterating with the cursor.
Voila, a boolean index.
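Putting the whole pattern together; a sketch, with the store name 'items' and the surrounding open/transaction plumbing assumed:

// in onupgradeneeded
var store = db.createObjectStore('items', { keyPath: 'id' });
store.createIndex('isActive', 'isActive');

// later, inside a readwrite transaction on 'items':
obj.isActive = 1;        // present in the index: 'true'
itemsStore.put(obj);

delete obj.isActive;     // absent from the index: 'false'
itemsStore.put(obj);

// iterate only the 'active' objects
db.transaction('items').objectStore('items')
  .index('isActive').openCursor().onsuccess = function (e) {
    var cursor = e.target.result;
    if (cursor) {
      console.log(cursor.value);
      cursor.continue();
    }
  };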
Performance notes
Opening a cursor on an index, as in the example here, will provide good performance. The difference in performance is not noticeable with small data, but it is extremely noticeable when storing a larger number of objects. There is no need to adopt a third-party library to accomplish 'boolean indices'; this is a mundane and simple feature you can implement on your own. You should use the native functionality as much as possible.
Boolean properties describe an exclusive state: 'Active/Inactive', 'On/Off', 'Enabled/Disabled', 'Yes/No'. You can use these value pairs instead of Booleans in a JS data model, for readability. This tactic also allows adding other states ('NotSet', for the situation where something was not configured on the object, etc.).
I've used 0 and 1 instead of the boolean type.

SQLAlchemy override reflected columns dynamically

I'm using SA in a script I'll be using to periodically 'copy' a subset of MySQL tables from a 'production' replica to dev/test systems. I had written code to simply reflect the source tables and call meta.create_all(destination_engine). Due to the nature of FKs, I now know I need to apply use_alter=True to the ForeignKeys on the tables as I create them, so that I won't get CircularDependencyErrors or other problems. I have to assume I don't know how many FKs there are, or their names, until I walk through the metadata.
I'm new to SA and am typically a Java programmer (as you will be able to tell :D). I tried to change the use_alter attribute iteratively at first:
tablesd = smeta.tables.items()
for tname, t in tablesd:
for c in t.columns:
for fk in c.foreign_keys:
fk.use_alter = True
smeta.create_all(to_engine)
EDIT: It's important to note that create_all() does NOT throw a CircularDependencyError after I set the use_alter property as I do above; if I remove that code, create_all() does not work. It just doesn't seem to be removing the FKs from the CREATE statements...
This obviously didn't work. I then read Overriding Reflected Columns in the SA docs, sample being:
mytable = Table('mytable', meta,
    Column('id', Integer, primary_key=True),   # override reflected 'id' to have primary key
    Column('mydata', Unicode(50)),             # override reflected 'mydata' to be Unicode
    autoload=True)
I'd guess that reflecting each table individually and then adding use_alter=True in the FK definition would work, but I CANNOT assume the names, values, or number of FKs/columns. I've read a lot about using DeclarativeBase to do something like this, but I'm not really sure how that would work...
How can I take my arbitrary list of tables, reflect them, then Override the use_alter option on their respective foreign keys? Am I thinking about this the wrong way?
The answer ended up being inside the problem (imagine that...). Although each ForeignKey object has a use_alter value that can be set, Constraints also have a separate use_alter property that can be set (I was not able to find this in the API documentation). After running it through PyDev's debugger, I noticed the former were being set, but all the keys that had Constraints associated with them still had use_alter as False. I set them to True thusly:
for fk in table.foreign_keys:
    fk.use_alter = True
    fk.constraint.use_alter = True
This seemed to produce the SQL I was looking for; the tables were created correctly with no CircularDependencyErrors, and metadata.sorted_tables worked fine with no errors. I was actually able to refactor my code and do things the RIGHT way!
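For context, a minimal end-to-end sketch of the whole approach; the engine URLs are placeholders, and the API is the same SQLAlchemy-era API used above:

from sqlalchemy import MetaData, create_engine

from_engine = create_engine('mysql://user:pass@replica/mydb')  # placeholder URL
to_engine = create_engine('mysql://user:pass@dev/mydb')        # placeholder URL

# reflect every table on the source side
smeta = MetaData()
smeta.reflect(bind=from_engine)

# mark every FK (and its owning constraint) to be added via ALTER TABLE,
# so create_all() can order the CREATEs despite circular references
for table in smeta.tables.values():
    for fk in table.foreign_keys:
        fk.use_alter = True
        fk.constraint.use_alter = True

smeta.create_all(to_engine)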
For anyone looking to do DB-->DB reflecting with complex FKs using SQLAlchemy, this answer and Tyler Lesmann's article are for you.
UPDATE: Using this method has passed a peer review and is now being used as production code. Seems to work well!