How to debug a system error in Google Data Studio? - google-apps-script

I'm working on a community connector. My fields are getting pulled in properly, but when I try to use the connector in a report I get the error below.
I'm less concerned about the specific error and more concerned with how I even figure out what is going on or breaking on the server.
Has anyone figured out good ways to debug these errors?

After a lot of research and digging around, I found that it came down to an invalid schema. Ugh. I noticed that getData wasn't even being run when I tried to use the data in a report, which made me think it was failing elsewhere.
In my case I was using a prebuilt JSON object and passing it to the schema field for Data Studio.
Unfortunately, Google provides no feedback for misconfigured JSON schemas.
:sigh:
I simplified the schema and found the issue was an incorrect data type. Once I fixed this, everything worked :)
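For anyone else hitting this, a minimal sketch of the kind of schema involved (the field names here are hypothetical). In this schema format, dataType must be one of the documented values (STRING, NUMBER, BOOLEAN); anything else fails with the generic system error instead of a useful message:

    // Minimal community connector schema sketch; field names are hypothetical.
    var schema = [
      {
        name: 'orderDate',
        label: 'Order Date',
        dataType: 'STRING', // a value like 'DATE' here is invalid and breaks the report
        semantics: { conceptType: 'DIMENSION', semanticType: 'YEAR_MONTH_DAY' }
      },
      {
        name: 'revenue',
        label: 'Revenue',
        dataType: 'NUMBER',
        semantics: { conceptType: 'METRIC' }
      }
    ];

    function getSchema(request) {
      return { schema: schema };
    }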
This method is mentioned here, and even the Google Data Studio docs say it's hard to debug:
https://developers.google.com/datastudio/connector/semantics
It's nice to have the schema separate from the code, but be careful that the schema is correct; otherwise you'll run into this very generic issue, at least until they add more logging in this area.
Hope this helps someone!

Sharing my discovery here just in case. I hit the same error, but in my case the problem occurred in a Pie Chart; the same metric combined with another segmentation didn't show the problem.
After some testing I found which case generates the problem: under one specific segmentation the metric produced a negative total, which Pie Charts have problems displaying.
My field holds both string and numeric values, but that has not been a problem (so far, at least). To solve it I created a new field in which negative values are replaced by 0 (this doesn't affect the insight I need, because those are values I don't need to track or show in a Pie Chart).
So my suggestion is to work out whether the error happens only with a specific segmentation or all the time. With that segmentation, create filters to display each segment on its own; if the problem recurs, you will discover which segment triggers the error. Then check whether the field you are summing, averaging, or otherwise aggregating has negative values. If it does, a Pie Chart cannot display it, and you need to filter, replace, or apply some other rule.
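As a hedged sketch, the replacement field can be written as a Data Studio calculated field (my_metric is a hypothetical name; substitute your own metric):

    CASE WHEN my_metric < 0 THEN 0 ELSE my_metric END

Charts that aggregate the new field then never see a negative total.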

Related

What are the possible outputs of 'NET USE'?

The question is: what are the possible outputs of NET USE?
You can drown in websites explaining how to use the NET USE command, but not a single one explains what comes out of it.
In my case I'm interested in the various error messages and in the interaction with the PowerShell automatic variable $LASTEXITCODE. I want to handle its output correctly, but I don't know what can even happen (and no, I won't use New-PSDrive).
Does someone know what the outputs are, or where I can find the information?
Thanks
You can use the example in https://www.compatdb.org/forums/topic/20487-net-use-return-code/ to obtain a list of the numerical codes for your evaluation.
If you want to dig deeper, you need to download the Win32 SDK and go through the definitions in the header files (see https://learn.microsoft.com/en-us/windows/win32/api/winnetwk/ns-winnetwk-netresourcea etc.).
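As a minimal PowerShell sketch of the $LASTEXITCODE interaction (the share path is hypothetical):

    net use Z: \\server\share /persistent:no
    if ($LASTEXITCODE -ne 0) {
        Write-Warning "NET USE failed with exit code $LASTEXITCODE"
    }
    # The numeric codes from the list above can be translated into their
    # message text, e.g.:
    #   net helpmsg 85   ->  "The local device name is already in use."

NET USE sets a zero exit code on success and a non-zero one on failure, so the check above is enough to detect that something went wrong before you try to interpret the specific code.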

ABBYY Flexicapture 'Internal Program Error' on repeating group

Something happened to corrupt an ABBYY FlexiCapture project of mine. In the Document Definition Editor, when I expand a couple of my repeating groups, I get:
Internal Program error:
.\DocumentTemplate\TemplateNode.cpp, 626.
Then I can click OK, and some of the child elements will load while others will not. This means I cannot update rules in the ones that don't show up.
Is there a known way to fix this? Right now I'm rebuilding the whole document definition.
I'm using the distributed version of the software.
InternalProgramError is usually some kind of unhandled error, hence the reference to the .cpp file and line number. I have no information about this specific error, unfortunately, but the fact that it worked before and started producing the error some time into use indicates that the setup was correct and got corrupted at some point. Otherwise, if you made a change to the template and then started to see the error, undoing that change is the way to go. It is pretty rare for template modifications to cause these errors in the first place, unless you are doing something non-standard in the UI or in scripts.

JSON Circular Structure in Entity Framework and Breeze

I have a horribly nested Entity Framework structure. A Schedule holds multiple defaults and multiple overrides. Each default/override has a reference back to the schedule and a "Type". The Type has a reference back to any defaults or overrides it belongs to. It's messy, but I think it's probably the only way I can do what's required.
This data ends up in a browser in the form of Breeze entities. Before I can process them when saving them back to the server, I have to turn them back into JSON, which unsurprisingly trips the dreaded "Uncaught TypeError: Converting circular structure to JSON".
Now there are a number of perfectly good scripts for removing these circular structures, but all of them seem to replace the circular references with some sort of placeholder so the objects can be re-created later. Entity Framework, of course, doesn't recognise the placeholders and can't work with them.
I'm at a loss as to what to do at this point. Simply removing the circular references rather than replacing them doesn't seem to help either, as it can leave structures shorn of important data.
I've also tried changing my EF queries to bring back only the data required, but they insist on giving me absolutely everything, even though lazy loading is set to false and I have no .Include statements in my queries. In any case, that feels like solving the wrong problem, since we're bound to want to deal with complex data at some point.
Is there any other way round this?
EDIT: I've solved this temporarily by inspecting the object and removing the circular properties by name, but I'd still like a generic solution if at all possible.
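For reference, a minimal sketch of that temporary fix as a JSON.stringify replacer; the navigation-property names below are hypothetical, so substitute the back-references from your own model:

    // Back-reference properties to strip; adjust for your model.
    var NAV_PROPS = ['schedule', 'type'];

    var json = JSON.stringify(entities, function (key, value) {
        // Returning undefined drops the property from the output entirely,
        // instead of inserting a placeholder that EF wouldn't understand.
        return NAV_PROPS.indexOf(key) !== -1 ? undefined : value;
    });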
It seems like you are after the serialization mode. Find the Serialization Mode setting in the Properties window of your designer screen and set it to Unidirectional; this should solve your serialization issue.
Hope that helps!
Not sure I understand the question. You should never experience any kind of circularity issue with a Breeze SaveChanges call, regardless of the complexity of your model (Breeze internally unwraps all entities, and any circularities, before serializing). If you are seeing something different, then this would be a bug. Is that the case?

Report created by ReportViewer shows an error message in Excel

If I generate an Excel report, Excel 2010 shows the following warning message:
file error: data may have been lost
Note: I have already found the solution and will post it immediately; I'm making this entry for other people who have the same error.
It turned out that the report's data source contained a value of -0 (negative zero). The data type was decimal. Excel cannot handle this.
The problem seems to lie in the Excel formula engine and not in the report renderer (however, I think MS has to solve the problem in the report renderer):
http://connect.microsoft.com/SQLServer/feedback/details/680863/negative-zero-causes-file-error-data-may-have-been-lost-in-excel-2010-when-exporting-ssrs-report
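As a hedged workaround sketch, assuming you can normalize values before they reach the report dataset: System.Decimal can carry a negative-zero bit pattern, but it still compares equal to 0m, so it is easy to strip:

    // Maps any -0 produced by upstream decimal arithmetic to a plain 0;
    // every other value passes through unchanged.
    static decimal NormalizeZero(decimal value)
    {
        return value == 0m ? 0m : value;
    }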
I was facing a similar issue after I changed embedded images into external images. After this change, the Bit depth property of some of the images remained 32 (click Properties -> Details tab).
I changed the bit depth to 24 using the ImageMagick utility (http://www.imagemagick.org/script/binary-releases.php).
I used the command "convert -depth 24 oldimage.bmp newimage.bmp" to change the Bit depth property, and this solved my problem.
I know this is not the solution to your problem, but if somebody comes across this post while searching, it might help them.

SSIS: common row number for both outputs on a flat file source

I have a small problem (I assume...)
I'm loading a flat file (CSV) and I want to add a row number to the data flow. Using the Row Number transformation works fine for each of the output paths (source and error) individually. But what if you want to use the same row number in both paths, to be able to track where in the file an error occurred? I have scratched my head long enough now and I'm just throwing this out here, since I'm pretty sure other people have stumbled across this one...
I have tried the script transformation, which seems to work for a while but then hangs the load.
Any suggestion on how to solve this issue is greatly appreciated.
If I understand you correctly, dynamically generating the number with a script component in the data flow is not a problem for you; a minimal sketch follows below.
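For reference, a hedged sketch of such a synchronous script component; RowId is a hypothetical Int32 output column you would add on Output 0 in the component's editor:

    public class ScriptMain : UserComponent
    {
        // Counter shared by every row this component processes.
        private int _rowNumber = 0;

        public override void Input0_ProcessInputRow(Input0Buffer Row)
        {
            // Stamp each incoming row with the next sequential number.
            _rowNumber++;
            Row.RowId = _rowNumber;
        }
    }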
What I would recommend is to adopt the following philosophy for stable ETL processes that load from files. The point is that if the source never casts anything, it never fails, so every row reaches the row-numbering component before any validation can reject it:
Never cast anything in the connector; just import the fields as nvarchars of the maximum length they will reach.
Cast and validate each column against your specification downstream.
If a row cannot be read at all, you will not know its index, but you will know that the file is malformed (extremely rare in my experience, mostly with half-transferred files), and such a file should be rejected anyway.
A quick screenshot of part of a file-loading process shows how the rejection (after assigning row_id) can work (link to dataflow image). To this you can add countless further checks (duplicates...) and even keep a repository of the loaded files, to check on the rejects and whatever else you might want to control (link to control flow image).
In some of my processes I even use a flat file connector that just imports each row as one bulk text column, then split it into columns with an intermediate script component, which allows for different versions of the columns in the files; a sketch of the splitting component follows below.
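Again as a hedged sketch, with a hypothetical delimiter and hypothetical column names; RawLine stands for the single wide text column produced by the connector:

    public override void Input0_ProcessInputRow(Input0Buffer Row)
    {
        // Split the bulk line into fields; missing trailing fields are
        // simply left unset so that later validation can reject the row.
        string[] parts = Row.RawLine.Split(';');
        if (parts.Length > 0) Row.CustomerId = parts[0];
        if (parts.Length > 1) Row.OrderDate = parts[1];
    }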
Anyway, sorry not to be more detailed (due to my status I can't add more links or any images), but I hope you understand the concept.
Regards,
Francisco.