I'd like to use a CSV file as a lookup table to update some attributes.
So I figured the LookupAttribute processor was what I needed. I configured it with a SimpleCsvFileLookupService as the Lookup Service, but I can't get it to work yet.
My SimpleCsvFileLookupService is configured but stays in an "enabling" state, and the LookupAttribute processor still tells me it's "invalid because performing validation depends on referencing a Controller Service that is currently disabled".
I don't understand why it won't enable. Has anybody used these components? Thanks.
Edit:
I didn't see the message on the left. It says the mapping for "1" is not found ("1" is set as the lookup key column, and in the CSV the header row is "1;2;3;4;5;6;7;8").
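For illustration, a file laid out like that would look something like this (the data values below are made up; only the header row comes from my actual file):
1;2;3;4;5;6;7;8
abc;val2;val3;val4;val5;val6;val7;val8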
What am I missing? I can't find any explanation of how to use this controller service.
Edit 2: The SimpleCsvFileLookupService properties
Edit 3: Extract of the CSV file
I have to migrate data from CRM Business Central into an Azure SQL database. The source data comes from a REST API. I created a linked service for it. Then I created a copy activity with the following:
The preview works: I get data in JSON format. In the mapping tab, I tried to import the schema and set the field "value" as an array. I got the following result:
However, on the right side of the mapping, I can only see "#odata.context" proposed as a "sink mapping". I overrode it by typing in the right fields. When I run the pipeline in debug mode, the pipeline is not triggered and I receive a "Bad Request":
The error comes from the mapping. My question is: how does "Import schema" work in the case of JSON data? Do I have to import the schema manually?
Change the sink column #odata.context to another name that excludes the special characters.
I reproduced this with a sample REST API and got the same error when I used special characters (#, .) in the sink column name, e.g. #data.id.
I changed the sink column name (e.g. data_id) to exclude the special characters, and the pipeline ran successfully.
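As a rough sketch, the resulting mapping section of the copy activity could look like the following (a TabularTranslator with the "value" array as the collection reference; the source field and the data_id sink column here are assumptions, not taken from the actual pipeline):
"translator": {
    "type": "TabularTranslator",
    "collectionReference": "$['value']",
    "mappings": [
        {
            "source": { "path": "['id']" },
            "sink": { "name": "data_id" }
        }
    ]
}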
I'm getting this exception when trying to ->write() a DataObject called 'ModelSheet'. It says the name should be Models\ModelSheet instead of just ModelSheet (I am under the same namespace (Models) and even tried with a use statement).
Hi Guilherme, and welcome to Stack Overflow.
It seems that the ClassName saved to your database record does not match your PHP class name.
When changing class names (adding or changing a namespace changes the class name), you need to update the database to reflect these changes, as the class name is saved in the DB so that Silverstripe knows which PHP object is related to the data record.
If you used Silverstripe's upgrader tool, you should have an .upgrade.yml in your module's directory (e.g. in app or mysite). If not, you can add it manually (see e.g. https://github.com/wernerkrauss/silverstripe-onepage/blob/master/.upgrade.yml as a random example). The structure looks like this:
mappings:
  OldClassName: My\Namespace\NewClassname
After that, all you need to do is run dev/build/?flush and your database should be updated.
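Applied to the class in your question, the mapping would presumably be:
mappings:
  ModelSheet: Models\ModelSheet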
I am doing an HTTP GET request to /maximo/oslc/os/mxsr and using the oslc.select query string parameter to choose:
*,doclinks{*},worklog{*},rel.commlog{*},rel.woactivity{*,rel.woactivity{*}}
This lets me get related data, including related worklogs, but the worklog does not include the 'description_longdescription' field.
The only way I seem to be able to get that field is if I do a separate HTTP GET to query a worklog id directly through /maxrest/rest/mbo/worklog; that request does provide the description_longdescription field.
I understand this field is stored separately through the linked longdescription table, but I was hoping to get the data through the "next gen" OSLC API with one HTTP GET request.
I've tried putting in 'worklog{*,description_longdescription}', as I read somewhere that longdescription is a "non-persistent" field and must be explicitly named for inclusion, but it had no effect.
I figured out that for the /maximo/oslc/os/mxsr object in the API, I needed to reference the related MODIFYWORKLOG object through the rel.modifyworklog syntax in the oslc.select query string:
oslc.select=*,doclinks{*},rel.modifyworklog{*,description_longdescription},rel.commlog{*},rel.woactivity{*,rel.woactivity{*}}
I also had to explicitly name the non-persistent field description_longdescription for it to be included.
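Putting it together, the full request looks something like this (the host is a placeholder, and any paging or authentication parameters are omitted):
GET https://<maximo-host>/maximo/oslc/os/mxsr?oslc.select=*,doclinks{*},rel.modifyworklog{*,description_longdescription},rel.commlog{*},rel.woactivity{*,rel.woactivity{*}}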
Ref. for the "rel." syntax: https://developer.ibm.com/static/site-id/155/maximodev/restguide/Maximo_Nextgen_REST_API.html#_querying_maximo_asset_management_by_using_the_rest_api
I have a Google chart and want to add a custom tooltip. I found some great answers like this site and set about doing it with roles. I also found this link about it, and it looked like the best way.
My data is generated as JSON, and I use a PHP file to create the JSON feed. I have coded it like this:
{"cols": [ {"id":"","label":"Period","pattern":""},
{"id":"","label":"Recorded P/L","type":"number", "role":"data"} ,
{"id":"","label": null,"type":"string", "role":"tooltip"},
{"id":"","label":"Best Available P/L","type":"number", "role":"data"},
{"id":"","label": null,"type":"string", "role":"tooltip"}
]
Then it goes on and adds all the data. The problem is that when I try to run this I get the error:
All series on a given axis must be of the same data type
I have checked the JSON and it is well formed, but I am not sure what I could be doing wrong.
At least part of your problem is that you're not specifying the type for your first column.
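For example, the first column definition could be given an explicit type (assumed here to be string, since it holds period labels), while leaving the other columns as they are:
{"id":"","label":"Period","type":"string","pattern":""},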
When developing a Data Flow, I don't always want to output the results to a destination, but I would like to see the data.
Is there a way to attach a Data viewer to an output without having to have a destination?
The file and raw destinations have limitations on the data types they accept, and I don't want to attach conversions just to test/build code.
Is there some kind of output to null? I could then get a data viewer on the result set.
There is a (free) custom "trash" destination available from a third party:
http://www.sqlis.com/post/Trash-Destination-Adapter.aspx
I usually use the Export Column transform: if left with its default configuration, it does nothing, so it is equivalent to the custom "Trash" destination mentioned by Ed, but you don't have to install anything.
When debugging and wanting to view the data in the buffer, I usually throw in a Union All and connect it below the component whose output I want to see. Then add a Data Viewer on the connector and voila, there it is!
There isn't really. You can use a RecordSet Destination, or a Row Count transform instead.