CloudConnect CSV buffer size

When I try to load a big CSV from a zip file, the execution log gives me the following error:
----------------------------------------- Error details ------------------------------------------
Component [Clientes:CLIENTES1] finished with status ERROR.
The size of data buffer is only 100663296. Set appropriate parameter in defaultProperties file.
--------------------------------------------------------------------------------------------------
How can I set the appropriate parameter in defaultProperties file?
I tried this link, but my CloudConnect Run Configurations page is different from the one shown there.
I've created the parameters file and filled in the additional parameters with the values the tutorial describes (see below), but the same error still appears on screen.
Name: -config; Value: new_buffer_size.txt
The new_buffer_size.txt file contains just this line: DEFAULT_INTERNAL_IO_BUFFER_SIZE = 200000000
How can I solve this problem? I need to solve this before the world explodes.

CloudConnect is designed for developing ETLs that run on GoodData cloud workers, and therefore some lower-level settings are overridden, as in this case. The only legitimate way is to modify the ETL so that it can process the data with the current settings. Regarding the docs, the referenced article is outdated; the GoodData docs team is aware of it and is preparing a docs refactoring.
Note: As you have probably noticed, CloudConnect is powered by Javlin's CloverETL, so feel free to check their forums. There you will find how to overcome the issue at a lower level (no UI), but that will only work for data processing on the local machine.
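For a purely local CloverETL run (outside the GoodData workers), the override you attempted is the right shape. A sketch, assuming your local engine accepts a -config argument pointing at a properties file:
-config new_buffer_size.txt
where new_buffer_size.txt contains: DEFAULT_INTERNAL_IO_BUFFER_SIZE = 200000000
Again, this would only affect graphs executed on your own machine, not ETLs run on the GoodData platform.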

Related

TF293000: The data warehouse has detected data conflicts for the following work item fields

Hi, I'm looking for help with the following issue:
In TFS, on our SSRS report server, whenever I run any of the out-of-the-box Sprint Burndown reports, the report seems to run successfully but I get the TF293000 error in the bottom right-hand corner.
Through some research I found that the issue was due to the field definitions in that particular Collection not matching the other collections that we have in TFS. Simple...
In order to determine which field definition in the collection was the issue I used the witadmin command listfields for all of my collections:
witadmin listfields /collection:Collection /n:Microsoft.VSTS.Common.ReviewedBy
This led me to find that the Synchronizes Identity Name Changes definition in the collection mentioned in the TF293000 error was set to a value of true, while it is false in all of my other collections. Issue Found! Should be easy from here...wrong.
The following command should solve my problem:
witadmin changefield /collection:Collection /n:Microsoft.VSTS.Common.ReviewedBy /syncnamechanges:false
*of course with the proper collection URL subbed in for the word Collection
However when run and after I confirm that I want to make the change I get the following error:
TF401327: The operation is not supported. The feature is obsolete.
I looked the error up and it took me to this page, TFS Known Issue, which tells me it is a known issue but was resolved in Update 1... we have Update 3.
I then attempted to simply edit the WIT .xml file and update the attribute for that WIT on that collection to false, but when I import the change to the server it tells me it has imported successfully; however, when I export it I see that the file has not changed.
I have also tried copying the .xml file from the same WIT in another collection and uploading that to the offending collection, and that will not work either. I've never had an issue with uploading a WIT before, as we've made several changes to our TFS workflow in the past. I'm pretty stuck at this point and just wondering if anyone else has experienced this issue before. Thanks!
According to the error info, it seems there is a conflict in the TFS data warehouse. This happens because two fields in different collections have different attributes, and there is only one data warehouse. To avoid schema conflicts when you export and process data to the data warehouse databases, you must assign the same values to these attributes across all collections:
Field type (the value for this field cannot be changed for an existing field).
Reporting type.
Reporting name.
What you have done is the correct operation: change/update the attribute for the field in one project collection to match the assignments made in the other project collections.
You could try to narrow down the issue: does it only happen on that specific field in that team project collection, and are all other work item fields working correctly? Also try it against the other collections, for example change syncnamechanges to true and then set it back to false, to see whether any issue occurs.
Run the command line on the TFS server machine instead of your development machine, and clear the TFS cache. If the field is not used for reporting in those project collections, you could also try marking it as non-reportable (see the sketch after the links below). For more details, please refer to these links:
Resolve data warehouse schema conflicts
Change a reportable attribute for a work item field
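A rough sketch of those commands, assuming the documented witadmin changefield options (substitute your real collection URL, and only use /reportingtype:none if the field really is not needed for reporting):
witadmin changefield /collection:Collection /n:Microsoft.VSTS.Common.ReviewedBy /syncnamechanges:true
witadmin changefield /collection:Collection /n:Microsoft.VSTS.Common.ReviewedBy /syncnamechanges:false
witadmin changefield /collection:Collection /n:Microsoft.VSTS.Common.ReviewedBy /reportingtype:none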

Changing a value in a .config file based on a user's selection in an InstallShield 2013 install

Sorry - I'm a total newbie with InstallShield. I've inherited an InstallShield 2013 project that presents the user with a dialog that lets the user select a SQL Server and, based on their selection, sets a value in a config file. That's not working, so I opened the project in IS and looked in the Text File Changes under System Configuration, and there's nothing there that would do this. So how do I figure out where this is happening (or not happening, in my case), and then how do I get it to work? I need to set both data source and initial catalog in a file called server.config.
So how do I determine what the user selected and then save that in this file? It looks like I can set up a Text File Change, but how do I access the values selected by the user? And how can I figure out where the "code" is that is supposed to be doing this?
Thanks,
Ben
I would try to track this from the dialog and controls in question, or by following the value through a verbose log. Since you say it doesn't work today, there will probably be an interruption in the flow I describe below, and since you don't know the full state of the installation project, it may be hard to identify. So search from what you know.
Top down: what gets configured
First, find the dialog that you fill out as a user making the selection. Then figure out the property that the particular control is associated with. Now you've got a thread; pull on it.
Search in the direct editor for references to the property. If the property is named MYCONFIG search for just that: MYCONFIG. You'll probably find some sort of use that looks like [MYCONFIG] instead, which is typically a format string specifying to use the value of MYCONFIG. You may also have to search all the files related to your project, as Custom Action implementations can be code stored outside of your InstallShield project.
The use may be in a ControlEvent, CustomAction, or some other table. If it's in a ControlEvent, it may be used to set another property. Ditto if it's in a CustomAction that sets properties (type 51) which may be easier to understand in the Custom Actions and Sequences view. In that case, also search for the property that gets set.
If you find it in a table like ISSearchReplace* or ISXml*, or IniFile, it's probably part of the Text Files Changes, XML File Changes, or INI File Changes, and that view should make it easier to understand.
Maybe that thread dead-ends somewhere. A property gets set, but never referenced. So try to search from the other end.
Bottom up: what gets written
If there are text file changes, xml file changes, ini file changes, or custom actions that reference the file you need updated, see where they get their information. Try to follow it back. If they're well written, you should be able to identify the property (noting that one called CustomActionData comes from a property matching the name of the custom action it's used in), and then trace that further back using the same ideas as above, but in the other direction.
Where's the problem?
If the threads don't connect, that's probably the problem. It's also possible that a custom action lacks permissions but doesn't report a failure, or that the file name or path got misconfigured somewhere along the way. Look for small things like that if things look like they should work but don't.
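If you go down the verbose-log route, a minimal way to trace the value (MYCONFIG is the placeholder property name used above, and YourSetup.msi stands in for your actual package) is:
msiexec /i YourSetup.msi /l*v install.log
Then search install.log for lines such as "PROPERTY CHANGE: Adding MYCONFIG property. Its value is ..." and check whether the value ever reaches the action that is supposed to write server.config.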
It turns out that I misunderstood the problem and the project was never set up to change that value, so all I had to do was set up a Text File Change and it works perfectly. Thanks @Michael Urman for the thorough response - I really appreciate it!

Use Boost.Log to log to multiple files

I have read the Boost docs, but what they describe is too limited for my requirement:
My project has a main logger which is used for almost all logging; it uses time_based_rotation. I also want to log some messages into another file, so I can check those logs independently rather than mixed in with the main logs.
What I want is:
BOOST_LOG_INLINE_GLOBAL_LOGGER_DEFAULT(gl, boost::log::sources::severity_logger_mt<Logger::severity_level>)
BOOST_LOG(gl::get()) << message; // the main logger
BOOST_LOG_INLINE_GLOBAL_LOGGER_DEFAULT(loggerA, boost::log::sources::logger_mt)
BOOST_LOG(loggerA::get()) << message; // another logger, logs to another file
The problem is: how should I set up the logging core? Add a text_multifile_backend? But its usage is totally different from the Boost example. I feel that in my case the file name setting should belong to the logger rather than the logging core, but I don't know how to do that.
Take a look at this page; you may get some ideas.
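For example, one way to get a per-logger split is to tag each logger with a channel and filter two file sinks on that "Channel" attribute. This is only a minimal sketch (the file names, the "Extra" channel name, and the midnight rotation are illustrative, and it uses channel_logger_mt rather than your severity_logger):
#include <boost/log/core.hpp>
#include <boost/log/expressions.hpp>
#include <boost/log/sinks/text_file_backend.hpp>
#include <boost/log/sources/channel_logger.hpp>
#include <boost/log/sources/record_ostream.hpp>
#include <boost/log/utility/setup/common_attributes.hpp>
#include <boost/log/utility/setup/file.hpp>
#include <string>

namespace logging = boost::log;
namespace expr = boost::log::expressions;
namespace keywords = boost::log::keywords;
namespace sinks = boost::log::sinks;
namespace src = boost::log::sources;

int main()
{
    // Main sink: rotates at midnight and takes every record that is NOT on the "Extra" channel.
    auto main_sink = logging::add_file_log(
        keywords::file_name = "main_%Y%m%d.log",
        keywords::time_based_rotation = sinks::file::rotation_at_time_point(0, 0, 0));
    main_sink->set_filter(expr::attr<std::string>("Channel") != "Extra");

    // Secondary sink: only records tagged with the "Extra" channel end up here.
    auto extra_sink = logging::add_file_log(keywords::file_name = "extra.log");
    extra_sink->set_filter(expr::attr<std::string>("Channel") == "Extra");

    logging::add_common_attributes();

    // The channel attached to each logger decides which file its records reach.
    // Note: records without a "Channel" attribute match neither filter in this sketch.
    src::channel_logger_mt<> main_lg(keywords::channel = "Main");
    src::channel_logger_mt<> extra_lg(keywords::channel = "Extra");

    BOOST_LOG(main_lg) << "goes to the main log";
    BOOST_LOG(extra_lg) << "goes to the extra log";
    return 0;
}
If you want to keep your severity_logger for the main log, the same idea applies: attach a constant "Channel" (or any custom) attribute to each logger and do the routing with sink filters, rather than trying to configure the split globally in the logging core.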

Verify a Tif with ApprovalTests

I have been asked to update a system where header information gets injected into a tif via a 3rd party console application. I don't need to worry about that bit.
The part I have been asked to look at is the merge process that generates the header information.
The current file generated by the process is assumed to be correct before I make any changes, so I want to add it as an approved result; from that I can then check that the changes I make alter the file as expected.
I thought this would be a good opportunity to look at using ApprovalTests
The problem I have is that, for whatever reason, the links to the videos are considered corruptible here (possibly they would show me kittens jumping into boxes or something that would stop me working), which ironically means my work is slower because I cannot see any of the help videos.
What I have been looking at is the Approvals.Verify and Approvals.VerifyFile extensions.
But what appears to be happening is confusing me.
Using VerifyFile creates a received file, but the contents of that file are just a single line containing the name of the file I asked it to verify.
Using Verify(new FileInfo("FileNameHere")) does not appear to generate the received file that I need to flag as approved, but the test does fail saying that it cannot find the approved tif file.
I am probably using VerifyFile completely wrong and might be looking at using Verify wrong as well.
useful info?
It might be useful to know that, as this is a legacy application running as a Windows service, I have wrapped the service in a harness that allows me to call the routines. The files are physically written elsewhere on the machine, outside of my control (well, there is a config, but the service call generates a file in a fixed location if it is successful). I have tried copying that file into the unit test project, but that doesn't appear to help.
Verify(FileInfo) and VerifyFile(string) are both meant to verify an existing file. As such, they merely set the received file to the file you pass in. You will still need to move/approve/create the approved file.
Here is the pseudo code and process.
[UseReporter(typeof(DiffReporter), typeof(ClipboardReporter))]
public void TestTiff()
{
    string tif = YourProcessToCreateTifFile();
    Approvals.VerifyFile(tif);
}
[Note: if you don't have an image diff installed, like TortoiseDiff, you might want to use the FileLauncherReporter]
Run this; once you get the result, move the file over by pasting your clipboard into a cmd window.
It will move the temporary tif to your test directory with the name ClassName.TestTiff.approved.tif
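For reference, the command the ClipboardReporter places on the clipboard is roughly of this shape (the paths here are made up):
move /Y "C:\Temp\your-generated-file.tif" "C:\YourTestProject\ClassName.TestTiff.approved.tif"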
After that the test should pass until something changes.
Happy Testing!

SSIS: Data Conversion Package failed

I was trying to load a table from an Excel source to a SQL Server destination with a data conversion transformation task in an SSIS package, but when I ran the package, it failed with the following error messages:
===================================
Failure saving package. (Microsoft Visual Studio)
------------------------------
Program Location:
at Microsoft.DataTransformationServices.Design.Serialization.DtrDesignerSerializer.SerializePackage(IDesignerSerializationManager manager, Package package, TextWriter textWriter)
at Microsoft.DataTransformationServices.Design.Serialization.DtrDesignerSerializer.SerializeComponent(IDesignerSerializationManager manager, IComponent component, Object serializationStream)
at Microsoft.DataWarehouse.Serialization.DesignerComponentSerializer.Serialize(IDesignerSerializationManager manager, Object value)
at Microsoft.DataWarehouse.VsIntegration.Designer.Serialization.DataWarehouseDesignerLoader.Serialize()
at Microsoft.DataWarehouse.VsIntegration.Designer.Serialization.BaseDesignerLoader.Flush(Boolean forceful)
at Microsoft.DataWarehouse.VsIntegration.Designer.Serialization.BaseDesignerLoader.Flush()
at Microsoft.DataWarehouse.VsIntegration.Designer.Serialization.DataWarehouseContainerManager.OnBeforeSave(UInt32 docCookie)
===================================
An invalid character was found in text content.
(msxml6.dll)
------------------------------
Program Location:
at Microsoft.SqlServer.Dts.Runtime.Package.SaveToXML(String& packageXml, IDTSEvents events)
at Microsoft.DataTransformationServices.Design.Serialization.DtrDesignerSerializer.SerializePackage(IDesignerSerializationManager manager, Package package, TextWriter textWriter)
===================================
An invalid character was found in text content.
(msxml6.dll)
------------------------------
Program Location:
at Microsoft.SqlServer.Dts.Runtime.Wrapper.IDTSPackagePersist100.SavePackageToXML(Object& pvDestination, Boolean vbReturnDOM, IDTSEvents100 pEvents)
at Microsoft.SqlServer.Dts.Runtime.Package.SaveToXML(String& packageXml, IDTSEvents events)
I had the same issue, and found that the file I was attempting to import had a non-printing character at the end of the second header. Removing this fixed the issue.
Since you're looking to resolve this "urgently", I'll make both a short-term and long-term suggestion:
1) For the short-term, a practical approach:
Does it work, in your situation, to load the spreadsheet in sections?
Will it hurt anything, for you to re-try/re-do the load multiple times, to find the problem?
Or, can you try this load in a test environment?
If so, then try loading the first half, or the first 1/4, or the first 10 rows, etc., until you find the row that's causing the problem. At the very least, you can then load everything but that row... at best, maybe the problem will be easy to find at that point.
2) For the long-term... do you (or somebody there) have access to modifying the SSIS package?
If so, then I recommend coding error row redirection, within the Data Flow Task: rows with problems go to another destination (red arrow), while rows without problems go to the usual destination (green arrow).
For more on error row redirection, see these links:
A) Intro: Handling Errors in Data
B) Some issues others have run into:
Programmatically configuring error and truncation dispositions for row redirection
Hope that helps...