Getting an error when retrieving contacts from Exact Online

I want to retrieve all contact persons from one company in Exact Online and get the following error:
select * from AccountContacts
Error:
itgenusg026: The requested number of 3.308 columns is not supported. Restrict the number of requested columns to at most 1.000 columns.
Type: Invantive.Configuration.ValidationException
at Invantive.Configuration.ValidationException..ctor(String errorCode, String errorMessage, String kindRequest, String localStackTrace, String nk, Exception innerException)
at Invantive.Producer.Windows.Forms.UltraGridExtensionMethods.AddColumnsToDataGrid(UltraDataColumnsCollection dataBandColumns, ResultSet results, Dictionary`2& columnNameIdMap, String[] excludedColumnNames, Func`2 allowEdit)
at Invantive.Producer.Windows.Forms.QueryTool.ExecuteStatement(IProgressNotifier notifier, String statement, ParameterList bindVariables, Boolean showResultsInGrid, Boolean showStatistics, Boolean memorizeStatisticsInSqlHistory, Boolean allowPaging)
at Invantive.Producer.Windows.Forms.QueryTool.FetchResultsFromSql()
at Invantive.Producer.Windows.Forms.QueryTool.<>c__DisplayClass107_0.<FetchData>b__0()
at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
at System.Threading.ThreadHelper.ThreadStart()
at Invantive.Producer.Windows.Forms.UltraGridExtensionMethods.AddColumnsToDataGrid(UltraDataColumnsCollection dataBandColumns, ResultSet results, Dictionary`2& columnNameIdMap, String[] excludedColumnNames, Func`2 allowEdit) in File969:line 368
at Invantive.Producer.Windows.Forms.QueryTool.ExecuteStatement(IProgressNotifier notifier, String statement, ParameterList bindVariables, Boolean showResultsInGrid, Boolean showStatistics, Boolean memorizeStatisticsInSqlHistory, Boolean allowPaging) in File948:line 2847
at Invantive.Producer.Windows.Forms.QueryTool.FetchResultsFromSql() in File948:line 2437

The error 'The requested number of 3.308 columns is not supported. Restrict the number of requested columns to at most 1.000 columns.' is raised by the Invantive Query Tool when the result set from the database has more than 1.000 columns.
The grid used becomes extremely slow when there are multiple thousands of columns.
However, you will still want to see the contents. There are currently three options:
Option 1: determine the column list using the old Exact Online (XML) provider.
Option 2: use F4 (Describe columns) on the table name.
Option 3: use 'Show/Hide empty columns' in the grid tool bar.
The reason for this problem is that the XSD of Exact Online describes all possible output formats. In fact, only at most 200 of the sometimes hundreds of thousands of possible fields are ever returned. There is no documentation on which fields are available in which scenarios, so a SQL developer just needs to analyze the output for the relevant fields and their values.
Similar problems also occur for other ERP platforms, such as Twinfield, or anything else with an extensive XSD that has references between nodes at various levels.
Option 1: determine column list
Option 1 requires you to first log on using the old Exact Online (XML) provider. It uses Invantive SQL v1 as described in 'Invantive SQL grammar versions'. This SQL version is not completely ANSI-compatible; it removes fields from the result set that have no value. This cuts down the number of fields by approximately a factor of 1.000.
Then you copy the field names, using a right click on the cell or the export format 'SQL select', and replace the '*' in the query with the relevant fields.
After that you can switch back to the new Exact Online combined provider for XML and REST APIs and continue working.
Option 2: use F4 (Describe columns) on the table name.
As an alternative, you can also get a list of the available fields. These are available in the data dictionary view systemtablecolumns, or, more user-friendly, in the pop-up you get after positioning the cursor on the table name and pressing F4 or choosing 'Describe' from the Editor menu.
Option 3: use 'Show/Hide empty columns' in the grid tool bar.
As a last alternative, you can instruct the grid to only show columns for which some row has a non-empty value. The results grid has a toolbar button for that. Remember to activate it before running the query.

Related

Convert List<dynamic> to List<String>

I am getting data from a server. runtimeType shows that it has type List<dynamic>.
Currently I am using cast<String>() to get a List<String>.
But is that the only/right way?
var value = await http.get('http://127.0.0.1:5001/regions');
if (value.statusCode == 200) {
  return jsonDecode(value.body)['data'].cast<String>();
}
There are multiple ways, depending on how soon you want an error if the list contains a non-string, and how you're going to use the list.
list.cast<String>() creates a lazy wrapper around the original list. It checks on each read that the value is actually a String. If you plan to read often, all that type checking might be expensive, and if you want an early error if the last element of the list is not a string, it won't do that for you.
List<String>.from(list) creates a new list of String and copies each element from list into the new list, checking along the way that it's actually a String. This approach errs early if a value isn't actually a string. After creation, there are no further type checks. On the other hand, creating a new list costs extra memory.
[for (var s in list) s as String], [...list.cast<String>()], <String>[for (var s in list) s], and <String>[...list] are all other ways to create a new list of strings. The last two rely on an implicit downcast from dynamic; the first two use explicit casts.
I recommend using list literals where possible. Here, I'd probably go for the smallest version <String>[...list], if you want a new list. Otherwise .cast<String>() is fine.

Do Couchbase reactive clients guarantee order of rows in view query result

I use Couchbase Java SDK 2.2.6 with Couchbase server 4.1.
I query my view with the following code
public <T> List<T> findDocuments(ViewQuery query, String bucketAlias, Class<T> clazz) {
    // We specifically set reduce false and include docs to retrieve docs
    query.reduce(false).includeDocs();
    log.debug("Find all documents, query = {}", decode(query));
    return getBucket(bucketAlias)
            .query(query)
            .allRows()
            .stream()
            .map(row -> fromJsonDocument(row.document(), clazz))
            .collect(Collectors.toList());
}

private static <A> A fromJsonDocument(JsonDocument saved, Class<A> clazz) {
    log.debug("Retrieved json document -> {}", saved);
    A object = fromJson(saved.content(), clazz);
    return object;
}
In the logs from the fromJsonDocument method I see that rows are not always sorted by the row key. Usually they are, but sometimes they are not.
If I just run this query in the Couchbase GUI in the browser, I always receive the results in the expected order. Is it a bug, or is it expected that view query results are not sorted when queried with the async client?
What is the behaviour in other clients (not Java)?
This is due to the asynchronous nature of your call in the Java client + the fact that you used includeDocs.
What includeDocs will do is that it will weave in a call to get for each document id received from the view. So when you look at the asynchronous sequence of AsyncViewRow with includeDocs, you're actually looking at a composition of a row returned by the view and an asynchronous retrieval of the whole document.
If a document retrieval has a little bit of latency compared to the one for the previous row, it could reorder the (row+doc) emission.
But good news everyone! There is an includeDocsOrdered alternative in the ViewQuery that takes exactly the same parameters as includeDocs but will ensure that the AsyncViewRows come in the same order as returned by the view.
This is done by eagerly triggering the get retrievals but then buffering those that arrive out of order, so as to maintain the original order without sacrificing too much performance.
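Applied to the method from the question, the only change is in how the query is prepared. Here is a sketch, reusing the getBucket and fromJsonDocument helpers shown above:

public <T> List<T> findDocuments(ViewQuery query, String bucketAlias, Class<T> clazz) {
    // includeDocsOrdered() still fetches the documents eagerly, but buffers any
    // that arrive out of order so rows are emitted in the order of the view.
    query.reduce(false).includeDocsOrdered();
    return getBucket(bucketAlias)
            .query(query)
            .allRows()
            .stream()
            .map(row -> fromJsonDocument(row.document(), clazz))
            .collect(Collectors.toList());
}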
That is quite specific to the Java client, with its usage of RxJava. I'm not even sure other clients have the notion of includeDocs...

MUMPS can't format Number to String

I am trying to convert a large number to a string in MUMPS but I can't.
Let me explain what I would like to do :
s A="TEST_STRING#12168013110012340000000001"
s B=$P(A,"#",2)
s TAB(B)=1
s TAB(B)=1
I would like to create an array TAB where variable B will be the primary key for array TAB.
When I do ZWR I get
A="TEST_STRING#12168013110012340000000001"
B="12168013110012340000000001"
TAB(12168013110012340000000000)=1
TAB("12168013110012340000000001")=1
As you can see, the first SET recognizes variable B as a number (wrongly converted) and the second SET recognizes variable B as a string (as I would like to see).
My question is how to write the SET command so that variable B is recognized as a string instead of a number (which is wrong in my opinion).
Any advice/explanation will be helpful.
This may be a limitation of the sorting/storage mechanism built into MUMPS, and it differs between MUMPS implementations. The cause is that while variable values in MUMPS are untyped, index values are typed -- and numeric indices are sorted before string ones. When converting a large string to a number, rounding errors may occur. To prevent this from happening, you need to add a space before the number in your index to explicitly treat it as a string:
s TAB(" "_B)=1
As far as I know, InterSystems Caché doesn't have this limitation -- at least your code works fine in Caché, and in the documentation they claim to support up to 309 digits:
http://docs.intersystems.com/cache20141/csp/docbook/DocBook.UI.Page.cls?KEY=GGBL_structure#GGBL_C12648
I've tried to recreate your scenario, but I am not seeing the issue you're experiencing.
It is actually not possible (in my opinion) for the same command, executed immediately one after another, to produce two different results.
s TAB(B)=1
s TAB(B)=1
For as long as the value of B did not change between the executions, the result should be:
TAB("12168013110012340000000001")=1
This is also what the GT.M implementation of MUMPS returns in your case.

Assert.assertEquals junit parameters order

The order of the parameters for Assert.assertEquals method in JUnit is (expected, actual)
Although in another thread someone said that there is no reason for it, in one of my Java classes at university the professor mentioned a specific reason for that ordering, but I don't remember it.
Can anybody help me out with this?
Proper labeling in tools/failure results - The tools follow this order, and some GUI tools will label which value is the expected value and which is the actual value. At the very least, it will minimize confusion if the labels match the values; at worst, you spend time and effort tracking down the wrong issue, trying to trace the source of the actual value that wasn't actually the actual value.
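As a small illustration of that labeling point, consider this JUnit 4 test (the AnswerTest class and computeAnswer() method are hypothetical and simply stand in for code that wrongly returns 41):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class AnswerTest {
    // Hypothetical method under test; pretend it has a bug and returns 41.
    private int computeAnswer() {
        return 41;
    }

    @Test
    public void answerIsComputed() {
        // Fails with "java.lang.AssertionError: expected:<42> but was:<41>".
        // Swap the arguments and the report would misleadingly claim 41 was expected.
        assertEquals(42, computeAnswer());
    }
}

The failure message labels the first argument as 'expected' and the second as 'was', so the labels are only truthful if you pass the values in that order.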
Consistency across assertEquals usage - If you aren't consistent in your order throughout your assertions, you can confuse future-you (or other future maintainer) if the values are swapped arbitrarily from case-to-case, again lending to potential confusion.
Consistent parameter ordering across assert methods - It may be reversible for assertEquals, but the order may matter for other assert* methods (in JUnit's built-ins and in other supporting code/libs). Better to be consistent across them all.
Changes in the future - Finally, there may be a difference in a future implementation.
Technical - It's the expected value's equals method that is used:
There's one subtle difference after looking at the code. Many of the uses of assertEquals() will end up running through this method:
static public void assertEquals(String message, Object expected,
        Object actual) {
    if (expected == null && actual == null)
        return;
    if (expected != null && isEquals(expected, actual))
        return;
    ...
}

private static boolean isEquals(Object expected, Object actual) {
    return expected.equals(actual);
}
It's the equals method of the expected value that is used when both objects are non-null. One could argue that you know the class of the expected value (and thus know the behavior of its equals method), but you may not necessarily know for certain the class of the actual value (which in theory could have a more permissive equals method). Therefore, you could get a different result if you swap the two arguments (i.e. when the two classes' equals methods are not symmetric with each other):
A contrived case would be an expected value of an ArrayList and an actual value that could return any type of Collection instance, possibly an ArrayList, but also possibly an instance of a custom Collection non-List class 'Foo' (i.e. Foo does not implement List). The ArrayList's equals method (actually its AbstractList.equals) specifies:
Returns true if and only if the specified object is also a list, both
lists have the same size, and all corresponding pairs of elements in
the two lists are equal.
Perhaps 'Foo' class's equals method is more permissive specifying:
Returns true if and only if the specified object is also a collection, both
collections have the same size, and both collections contain equal objects
but not necessarily in the same order.
By saying:
ArrayList expectedArrayList = ...;
Collection actualCollectionPossiblyFoo = ...;
Assert.assertEquals(expectedArrayList, actualCollectionPossiblyFoo);
you are saying you expect something equivalent to an ArrayList (according to ArrayList/AbstractList's definition of equals). This will fail if actualCollectionPossiblyFoo is really of class Foo and thus not a List, as required by the ArrayList equals method.
However, this isn't the same as saying:
ArrayList expectedArrayList = ...;
Collection actualCollectionPossiblyFoo = ...;
Assert.assertEquals(actualCollectionPossiblyFoo, expectedArrayList);
because actualCollectionPossiblyFoo may be an instance of Foo, and Foo may consider itself and expectedArrayList to be equal according to Foo class's equals method.
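That contrived case can be made concrete with a small self-contained sketch. The LooseBag and EqualsOrderDemo classes below are hypothetical; LooseBag plays the role of 'Foo' and accepts any collection with the same elements, ignoring order and list-ness.

import java.util.AbstractCollection;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.HashSet;
import java.util.Iterator;
import java.util.List;

// Hypothetical stand-in for 'Foo': a Collection with a deliberately permissive equals.
class LooseBag extends AbstractCollection<String> {
    private final List<String> items;

    LooseBag(String... values) {
        this.items = Arrays.asList(values);
    }

    @Override
    public Iterator<String> iterator() {
        return items.iterator();
    }

    @Override
    public int size() {
        return items.size();
    }

    // Considers ANY collection with the same size and elements equal, regardless of order.
    @Override
    public boolean equals(Object o) {
        return o instanceof Collection
                && ((Collection<?>) o).size() == size()
                && ((Collection<?>) o).containsAll(items);
    }

    @Override
    public int hashCode() {
        return new HashSet<>(items).hashCode();
    }
}

public class EqualsOrderDemo {
    public static void main(String[] args) {
        List<String> expected = new ArrayList<>(Arrays.asList("a", "b"));
        Collection<String> actual = new LooseBag("b", "a");

        // ArrayList.equals requires the other object to be a List: prints false.
        System.out.println(expected.equals(actual));
        // LooseBag.equals only checks the elements: prints true.
        System.out.println(actual.equals(expected));
        // Hence assertEquals(expected, actual) would fail here,
        // while assertEquals(actual, expected) would pass.
    }
}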
There is no specific reason. They could have ordered their parameters the other way. BTW, TestNG does it the other way.
For better readability and expressibility, I prefer using fluent assertions with fest-assert.

Should we overload the meaning of configuration settings?

Imagine an instance of some lookup of configuration settings called "configuration", used like this:
if (!string.IsNullOrEmpty(configuration["MySetting"]))
{
    DoSomethingWithTheValue(configuration["MySetting"]);
}
The meaning of the setting is overloaded. It means both "turn this feature on or off" and "here is a specific value to do something with". These can be decomposed into two settings:
if (configuration["UseMySetting"])
{
    DoSomethingWithTheValue(configuration["MySetting"]);
}
The second approach seems to make the configuration more complicated, but slightly easier to parse, and it separates out the two sorts of behaviour. The first seems much simpler at first, but it's not clear what we choose as the default "turn this off" setting; "" might actually be a valid value for MySetting.
Is there a general best practice rule for this?
I find the question to be slightly confusing, because it talks about (1) parsing, and (2) using configuration settings, but the code samples are for only the latter. That confusion means that my answer might be irrelevant to what you intended to ask. Anyway...
I suggest an approach that is illustrated by the following pseudo-code API (comments follow afterwards):
class Configuration
{
    void parse(String fileName);
    boolean exists(String name);
    String lookupString(String name);
    String lookupString(String name, String defaultValue);
    int lookupInt(String name);
    int lookupInt(String name, int defaultValue);
    float lookupFloat(String name);
    float lookupFloat(String name, float defaultValue);
    boolean lookupBoolean(String name);
    boolean lookupBoolean(String name, boolean defaultValue);
    ... // more pairs of lookup<Type>() operations for other types
}
The parse() operation parses a configuration file and stores the parsed data in a convenient format, for example, in a map or hash-table. (If you want, parse() can delegate the parsing to a third-party library, for example, a parser for XML, Java Properties, JSON, .ini files or whatever.)
After parsing is complete, your application can invoke the other operations to retrieve/use the configuration settings.
A lookup<Type>() operation retrieves the value of the specified name and parses it into the specified type (and throws an exception if the parsing fails). There are two overloadings for each lookup<Type>() operation. The version with one parameter throws an exception if the specified variable does not exist. The version with an extra parameter (denoting a default value) returns that default value if the specified variable does not exist.
The exists() operation can be used to test whether a specified name exists in the configuration file.
The above pseudo-code API offers two benefits. First, it provides type-safe access to configuration data (which wasn't a stated requirement in your question, but I think it is important anyway). Second, it enables you to distinguish between "variable is not defined in configuration" and "variable is defined but its value happens to be an empty string".
If you have already committed yourself to a particular configuration syntax, then just implement the above Configuration class as a thin wrapper around a parser for the existing configuration syntax. If you haven't already chosen a configuration syntax and if your project is in C++ or Java, then you might want to look at my Config4* library, which provides a ready-to-use implementation of the above pseudo-code class (with a few extra bells and whistles).
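For example, here is a minimal sketch of such a thin wrapper in Java, built on java.util.Properties purely for illustration (the choice of Properties as the underlying parser and the exception types are assumptions for the example, not part of Config4*):

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

// Illustrative sketch only: a thin type-safe wrapper over java.util.Properties.
public class Configuration {
    private final Properties props = new Properties();

    public void parse(String fileName) throws IOException {
        try (FileInputStream in = new FileInputStream(fileName)) {
            props.load(in);
        }
    }

    public boolean exists(String name) {
        return props.containsKey(name);
    }

    public String lookupString(String name) {
        String value = props.getProperty(name);
        if (value == null) {
            throw new IllegalArgumentException("missing configuration variable: " + name);
        }
        return value;
    }

    public String lookupString(String name, String defaultValue) {
        return exists(name) ? lookupString(name) : defaultValue;
    }

    public int lookupInt(String name) {
        try {
            return Integer.parseInt(lookupString(name).trim());
        } catch (NumberFormatException e) {
            throw new IllegalArgumentException("not an int: " + name);
        }
    }

    public int lookupInt(String name, int defaultValue) {
        return exists(name) ? lookupInt(name) : defaultValue;
    }

    public boolean lookupBoolean(String name) {
        String value = lookupString(name).trim();
        if (value.equalsIgnoreCase("true")) { return true; }
        if (value.equalsIgnoreCase("false")) { return false; }
        throw new IllegalArgumentException("not a boolean: " + name);
    }

    public boolean lookupBoolean(String name, boolean defaultValue) {
        return exists(name) ? lookupBoolean(name) : defaultValue;
    }
}

With something like that in place, checking whether a setting is present becomes an exists() or lookupBoolean() call rather than a string-emptiness test, and an unset variable is no longer conflated with an empty string.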