XML Import to WordPress Database - mysql

What is the best way to keep IDs unique when importing records whose IDs may conflict with IDs already in the database? For example, here is XML to import into the WordPress taxonomy tables:
<CategoryData>
  <Category>
    <id>1</id>
    <parent_id>0</parent_id>
    <name>maincategory1</name>
    <desc>the main 1</desc>
    <keyword />
    <url />
    <image_location>1.jpg</image_location>
    <sort_order>9999</sort_order>
  </Category>
  <Category>
    <id>2</id>
    <parent_id>0</parent_id>
    <name>maincategory2</name>
    <desc>the main number 2</desc>
    <keyword />
    <url />
    <image_location>2.jpg</image_location>
    <sort_order>9999</sort_order>
  </Category>
</CategoryData>
However, the WordPress database already has IDs 1-20 (for example) used up by typical post categories for news, etc. If I keep receiving XML like this to update/add to my WooCommerce categories, how can I let the WordPress post categories and the WooCommerce shop categories from the XML co-exist without overwriting the existing IDs?
I'm all out of ideas except for asking the provider of the XML to change his IDs to begin at 100 or something.

You could save the XML ID as a custom meta field on the term. On import, check whether any term already has that specific XML ID: if not, create a new term; otherwise, update the existing one.
Adding meta fields to WordPress terms: https://codex.wordpress.org/Function_Reference/add_term_meta
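For example, a minimal sketch of that approach using the WooCommerce product_cat taxonomy; the xml_id meta key and the helper name are made up for the example:
<?php
// Sketch only: 'xml_id' and import_xml_category() are illustrative names.
function import_xml_category( $xml_id, $name, $description ) {
    // Look for a term that was already created from this XML id.
    $existing = get_terms( array(
        'taxonomy'   => 'product_cat',
        'hide_empty' => false,
        'meta_key'   => 'xml_id',
        'meta_value' => $xml_id,
    ) );

    if ( ! empty( $existing ) && ! is_wp_error( $existing ) ) {
        // The XML id is known: update the term WordPress already owns.
        wp_update_term( $existing[0]->term_id, 'product_cat', array(
            'name'        => $name,
            'description' => $description,
        ) );
        return $existing[0]->term_id;
    }

    // Unknown XML id: let WordPress assign its own term ID and remember the XML id.
    $term = wp_insert_term( $name, 'product_cat', array( 'description' => $description ) );
    if ( ! is_wp_error( $term ) ) {
        add_term_meta( $term['term_id'], 'xml_id', $xml_id, true );
        return $term['term_id'];
    }
    return 0;
}
The parent_id values from the XML can be resolved the same way: look up the parent's xml_id first and pass the resulting term ID as the 'parent' argument to wp_insert_term() or wp_update_term().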

Related

Convert my SQL query to QueryExpression or FetchXML in CRM

I have this SQL query where I am trying to fetch the OpportunityId from the opportunity entity for which no ApprovalDocument has been created (ApprovalDocument is the name of the other entity). I don't think FetchXML supports this kind of query. I am new to CRM and my project is on CRM 4.0.
Here's the SQL query:
SELECT OpportunityId
FROM opportunity AS c
LEFT JOIN (
    SELECT a.opportunitynameid
    FROM opportunity o
    JOIN ApprovalDocument a ON a.opportunitynameid = o.OpportunityId
) AS b ON c.OpportunityId = b.opportunitynameid
WHERE b.opportunitynameid IS NULL AND statecode = 0
I converted this into FetchXML, but it didn't give the correct result.
<fetch version="1.0" output-format="xml-platform" mapping="logical" distinct="true">
  <entity name="opportunity">
    <attribute name="opportunityid" />
    <link-entity name="approvaldocument" from="opportunitynameid" to="opportunityid" alias="a" link-type="outer">
      <attribute name="opportunitynameid" />
    </link-entity>
    <filter type="and">
      <condition entityname="a" attribute="opportunitynameid" operator="null" />
    </filter>
  </entity>
</fetch>
Natively, it is not possible to create an Advanced Find query for the absence of a relationship. However, there are several different solutions for achieving this functionality:
Workaround:
Create a marketing list with the full set of records and then remove records using the inverse of your condition. The steps for doing this are nicely laid out in this article.
Modifying FetchXML and third-party solutions:
Although Advanced Find cannot show "Not In" results, the underlying FetchXML does support this functionality. An example of manually building such a fetch is shown here. There are also several third-party tools that leverage this ability to provide "Not In" functionality directly in Advanced Find; the best solution I am aware of is available here.
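For reference, the general shape of such a fetch is the outer join plus null check the question already uses; compared to the original SQL, the only missing piece is the statecode filter. A sketch only (whether the entityname attribute on a condition is supported depends on the CRM version, which is why the workarounds above may still be needed on CRM 4.0):
<fetch version="1.0" output-format="xml-platform" mapping="logical" distinct="true">
  <entity name="opportunity">
    <attribute name="opportunityid" />
    <link-entity name="approvaldocument" from="opportunitynameid" to="opportunityid" alias="a" link-type="outer">
      <attribute name="opportunitynameid" />
    </link-entity>
    <filter type="and">
      <condition entityname="a" attribute="opportunitynameid" operator="null" />
      <condition attribute="statecode" operator="eq" value="0" />
    </filter>
  </entity>
</fetch>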

How to identify unique entries in a C-CDA file?

Basically, if a user uploads the same C-CDA document again, or other documents containing the same entries (medications, vitals, allergies, surgeries, etc.), I want to make sure those entries do not get duplicated in the database and are skipped instead of being inserted again.
Each entry inside an HL7 CDA document can have an id element, whose definition in the HL7 V3 RIM is:
3.1.1.3
Act.id :: SET (0..N)
Definition: A unique identifier for the Act.
Use it to uniquely identify your entries and avoid duplicates.
This element is not mandatory, but if you are implementing C-CDA, this template for substance administration specifies that it is mandatory, so you should ask the document sender to populate it. Here is a substance administration example from C-CDA:
<substanceAdministration classCode="SBADM" moodCode="EVN">
  <templateId root="2.16.840.1.113883.10.20.22.4.16"/>
  <id root="cdbd33f0-6cde-11db-9fe1-0800200c9a66"/>
  <text>
    <reference value="#med1"/>
    Proventil 0.09 MG/ACTUAT inhalant solution, 2 puffs QID PRN wheezing
  </text>
  <statusCode code="completed"/>
  <effectiveTime xsi:type="IVL_TS">
    <low value="20110301"/>
    <high value="20120301"/>
  </effectiveTime>
  <effectiveTime xsi:type="PIVL_TS" institutionSpecified="true" operator="A">
    <period value="6" unit="h"/>
  </effectiveTime>
  ...
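A minimal sketch, in Java with the standard DOM APIs, of how those ids could be used to skip duplicates during import; the class name, the in-memory knownIds set (which would really be backed by the database), and the insertEntry() helper are purely illustrative:
import java.io.File;
import java.util.HashSet;
import java.util.Set;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class CdaEntryDeduplicator {
    private static final String HL7_NS = "urn:hl7-org:v3";

    // Act ids (root + "^" + extension) that have already been stored.
    private final Set<String> knownIds = new HashSet<String>();

    public void importDocument(File ccda) throws Exception {
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        factory.setNamespaceAware(true);
        Document doc = factory.newDocumentBuilder().parse(ccda);

        // The example above is a substanceAdministration entry; other entry types work the same way.
        NodeList entries = doc.getElementsByTagNameNS(HL7_NS, "substanceAdministration");
        for (int i = 0; i < entries.getLength(); i++) {
            Element entry = (Element) entries.item(i);
            NodeList ids = entry.getElementsByTagNameNS(HL7_NS, "id");
            if (ids.getLength() == 0) {
                continue; // no id: cannot deduplicate reliably, handle as needed
            }
            Element id = (Element) ids.item(0);
            String key = id.getAttribute("root") + "^" + id.getAttribute("extension");
            if (knownIds.add(key)) {
                insertEntry(entry); // first time this Act.id is seen: insert it
            }
            // otherwise skip it: the entry was already imported
        }
    }

    private void insertEntry(Element entry) {
        // database insert omitted from the sketch
    }
}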
Martí
martipamies#vico.org

DbUnit: how to assert generated IDs

I need an idea/tip on how to use DbUnit to assert IDs generated by the database (e.g. MySQL's auto-increment column). I have a very simple case which I nevertheless find problematic at the moment:
Two tables: main and related. The main.id column is auto-incremented, and the related table has a foreign key to it: related.main_id -> main.id. In my test case my application inserts multiple entries into both tables, so the dataset looks similar to this:
<dataset>
<main id="???" comment="ABC" />
<main id="???" comment="DEF" />
<related id="..." main_id="???" comment="#1 related to ABC" />
<related id="..." main_id="???" comment="#2 related to ABC" />
<related id="..." main_id="???" comment="#3 related to DEF" />
<related id="..." main_id="???" comment="#4 related to DEF" />
</dataset>
Since the order in which the inserts will be performed is unclear, I cannot simply clear/truncate the tables before the test and use predefined IDs (e.g. the "ABC" entry comes first and gets ID 1, "DEF" comes second and gets ID 2). A test written that way would be wrong: with a bit of luck it might sometimes pass, and in other cases not.
Is there a clean way to test such cases? I still want to assert that the entries were created and linked properly in the DB, not only that they exist (which is all I would get by simply ignoring the auto-increment columns).
Based on the comments on the question, I am answering it myself, so this may help others looking for a similar solution.
In the end we skipped asserting the generated IDs, as they were not really interesting for us. What we actually wanted to check is that the entries in the main and related tables are properly linked. To achieve this, our unit test creates a dataset using a query that joins both tables:
SELECT main.comment, related.comment AS related_comment
FROM main, related
WHERE main.id = related.main_id
Then we assert that the dataset produced by this query matches the statically defined dataset:
<dataset>
<result comment="ABC" related_comment="#1 related to ABC" />
<result comment="ABC" related_comment="#2 related to ABC" />
<result comment="DEF" related_comment="#3 related to DEF" />
<result comment="DEF" related_comment="#4 related to DEF" />
</dataset>
When the datasets match, we can assume that the entries were linked properly.
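A minimal sketch of that assertion with the DbUnit API, assuming an IDatabaseConnection named connection and the expected rows stored in a file called expected-linked.xml (both names are illustrative):
import java.io.File;
import org.dbunit.Assertion;
import org.dbunit.database.IDatabaseConnection;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.ITable;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;

public class LinkedEntriesTest {

    public void assertEntriesAreLinked(IDatabaseConnection connection) throws Exception {
        // Run the join query; DbUnit exposes the result as a virtual table named "result".
        ITable actual = connection.createQueryTable(
                "result",
                "SELECT main.comment, related.comment AS related_comment "
              + "FROM main, related WHERE main.id = related.main_id");

        // Load the statically defined dataset shown above.
        IDataSet expectedDataSet = new FlatXmlDataSetBuilder().build(new File("expected-linked.xml"));
        ITable expected = expectedDataSet.getTable("result");

        // Fails if the linked rows differ from the expected dataset.
        // If row order is not deterministic, add an ORDER BY or wrap both tables in SortedTable.
        Assertion.assertEquals(expected, actual);
    }
}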
Alternatively, you could let DbUnit sort your main table by id and your related table by id automatically. Since the absolute number of rows is known in advance, this should solve your problem.
DbUnit allows sorting with org.dbunit.dataset.SortedTable, which takes a table and a list of columns to sort by (see the JavaDoc of SortedTable).
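For example, a fragment only (connection is the DbUnit IDatabaseConnection, and expected.xml is an illustrative file name):
// Load the actual database contents and the expected dataset.
IDataSet actualDataSet = connection.createDataSet();
IDataSet expectedDataSet = new FlatXmlDataSetBuilder().build(new File("expected.xml"));

// Sort both sides by id so the comparison does not depend on insertion order.
ITable actualMain = new SortedTable(actualDataSet.getTable("main"), new String[] { "id" });
ITable expectedMain = new SortedTable(expectedDataSet.getTable("main"), new String[] { "id" });
Assertion.assertEquals(expectedMain, actualMain);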

Best way to use a custom table in a relationship (in ExpressionEngine)?

So, I’m a bit unsure how to use a separate table in a relationship, or something like that…
I have a table with around 5000 hotels called exp_h_hotels.
On my website, I use the pages module to create a static subpage for each part of the country. I want to list all hotels that belong to a specific region.
I have understood that I can’t do something like this (using ExpressionEngine tags with the query module):
{exp:query sql="SELECT * FROM exp_h_hotels WHERE h_regionname ='{regionname}'"}
{hotel_name}
{/exp:query}
Does anyone know the best way to go forward with this?
I have looked into using the ExpressionEngine API to insert the data into a channel; however, I get the feeling that it wouldn’t be optimal to flood the channel entries table with 5000 entries of 14-20 data fields each.
There's no reason why this shouldn't work as you expect, so long as your exp:query tag is inside your channel:entries tag:
{exp:channel:entries channel="pages" limit="1"}
<h1>Hotels in {regionname}</h1>
<ul>
{exp:query sql="SELECT * FROM exp_h_hotels WHERE h_regionname ='{regionname}'"}
<li>{hotel_name}</li>
{/exp:query}
</ul>
{/exp:channel:entries}
However, for the long-term, importing your hotels into a new channel in EE is a much better plan. You could export from your database to CSV (using phpMyAdmin perhaps) and then import into EE using DataGrab. Nothing wrong with adding 5000 new entries to EE.

Parsing log files in a folder in ColdFusion

The problem: there is a folder ./log/ containing files like:
jan2010.xml, feb2010.xml, mar2010.xml, jan2009.xml, feb2009.xml, mar2009.xml ...
Each XML file looks like:
<root><record name="bob" spend="20"></record>...(more records)</root>
I want to write a piece of ColdFusion code (log.cfm) that simply parses those XML files. For the front end, I would let the user choose a year and then click a submit button. All the content for that year would show up in separate tables, one per month. Each table shows the total money spent by each person, like:
person cost
bob 200
mike 300
Total 500
Thanks.
The short answer is that, if your XML is correctly formatted, you can use the XMLParse() function to get the XML into a CF data object.
Sergii pointed out that XMLParse can take a path, so you can simply read the file directly into it and assign the result to a variable.
The data should look something like an array of structures. Use CFDUMP on your CF data object to view it and help you figure it out.
I would strongly urge you to check out Microsoft's "Log Parser" if you're on Windows. It provides essentially a SQL-like query interface to all manner of log files. There's an app called "Log Parser Lizard" which provides a GUI for testing out "queries", and then you can cfexecute logparser with the queries you come up with.
Not sure if there's an equivalent for linux.
If you're on Windows and interested in hearing more, let me know. I had it hooked into an ANT task that downloaded log files every night, parsed them, generated reports, etc. It worked very well.
Concrete example:
<CFSET year = 2011 />
<CFDIRECTORY directory="#ExpandPath('log')#" action="list" sort="name" name="logfiles" filter="*#year#.xml" />
<CFOUTPUT query="logfiles">
    <CFSET singlelogfile = XmlParse(directory & "/" & name) />
    <CFSET records = XmlSearch(singlelogfile, "//record") />
    <table>
        <tr><td colspan="2">Month: #left(logfiles.name,3)#</td></tr>
        <CFLOOP array="#records#" index="record">
            <tr><td>#record.XmlAttributes.name#</td><td>#record.XmlAttributes.spend#</td></tr>
        </CFLOOP>
    </table>
</CFOUTPUT>
Of course, you would need to change this so that the year comes from the FORM scope, sum up multiple records for each person (see the sketch below), and maybe (if you can control it) change the log filenames to 01-2011, 02-2011, 03-2011, ... 12-2011 so that they sort correctly.
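A sketch of the per-person summing, meant to replace the <table> block inside the CFOUTPUT loop above; variable names are illustrative and records is the array built in the example:
<!--- Sum the spend per person for the current file. --->
<CFSET totals = structNew() />
<CFSET grandtotal = 0 />
<CFLOOP array="#records#" index="record">
    <CFSET person = record.XmlAttributes.name />
    <CFIF NOT structKeyExists(totals, person)>
        <CFSET totals[person] = 0 />
    </CFIF>
    <CFSET totals[person] = totals[person] + record.XmlAttributes.spend />
    <CFSET grandtotal = grandtotal + record.XmlAttributes.spend />
</CFLOOP>
<table>
    <tr><td colspan="2">Month: #left(logfiles.name,3)#</td></tr>
    <tr><td>person</td><td>cost</td></tr>
    <CFLOOP collection="#totals#" item="person">
        <tr><td>#person#</td><td>#totals[person]#</td></tr>
    </CFLOOP>
    <tr><td>Total</td><td>#grandtotal#</td></tr>
</table>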