Odoo Multiline Config File

One of the parameters in my config file has a long value, and I want to put each comma-separated value on its own line.
From
addons_path = C:\My\Odoo\addons1, C:\My\Odoo\addons2, C:\My\Odoo\addons3
To
addons_path = C:\My\Odoo\addons1,
C:\My\Odoo\addons2,
C:\My\Odoo\addons3
How can I achieve that?

From the Supported INI File Structure section of the Python configparser documentation (Odoo's config file is parsed with configparser):
Values can also span multiple lines, as long as they are indented deeper than the first line of the value
So the following key/value entry, with the continuation lines indented, should work:
addons_path = C:\My\Odoo\addons1,
    C:\My\Odoo\addons2,
    C:\My\Odoo\addons3
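
Since the value is read with Python's configparser, you can sanity-check the multiline entry outside of Odoo. A minimal sketch, assuming the [options] section header that Odoo config files use:

# Minimal check that configparser joins indented continuation lines
# back into one comma-separated value.
import configparser

ini_text = """\
[options]
addons_path = C:\\My\\Odoo\\addons1,
    C:\\My\\Odoo\\addons2,
    C:\\My\\Odoo\\addons3
"""

parser = configparser.ConfigParser()
parser.read_string(ini_text)

# Continuation lines are joined with newlines, so split on commas
# and strip whitespace to recover the individual paths.
value = parser.get("options", "addons_path")
paths = [p.strip() for p in value.split(",")]
print(paths)
# ['C:\\My\\Odoo\\addons1', 'C:\\My\\Odoo\\addons2', 'C:\\My\\Odoo\\addons3']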

Related

How do I remove quotes from the column headers in a flat file with SSIS?

I've got a CSV file where all column headers and values are wrapped in quotes ("). In the Flat File Connection Manager Editor I've specified " in the Text qualifier field. That takes care of all the quotes around data values but it doesn't seem to affect the quotes around the column headers. Is there a way to strip the quotes from the column headers as well?
If it's a source, then the column names will unfortunately come through with the quotes, as "Col1". The other option would be to uncheck "file has a header row" and skip 1 row in the Flat File Connection Manager. Then you'd be able to rename the columns as you desire.
As I think about this, you might be able to manually change the column names in the Flat File Connection Manager to remove the double quotes. And I guess there's also an option to define the column name in the Flat File Source within your data flow, so the FFCM would specify "Col1" and you could map it to a friendlier name like Column1.
You can convert your CSV file into a text file and then load that one. After you load the file, put " in the Text qualifier field; this should take care of both the double quotes around your headers and the ones around your column values.
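
If pre-processing the file is acceptable, another option is to strip the qualifier from the header row only and let the Text qualifier handle the data rows as usual. A minimal sketch in Python, with placeholder file names:

# Strip the double quotes from the header row only; data rows are
# copied untouched so the Text qualifier can still handle them.
# "input.csv" and "output.csv" are placeholder names, and this
# assumes the header names themselves contain no commas.
with open("input.csv", "r", encoding="utf-8") as src, \
        open("output.csv", "w", encoding="utf-8") as dst:
    header = src.readline()
    cleaned = [col.strip().strip('"') for col in header.split(",")]
    dst.write(",".join(cleaned) + "\n")
    for line in src:
        dst.write(line)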

CSV import (neo4j browser) returning empty nodes only i.e. without properties

I am unable to successfully import a csv file in the neo4j browser, as the nodes are created but they do not show the properties. Does anyone see the problem? I will describe how I proceeded:
This is how the csv file looks
I have tested the csv file with LOAD CSV WITH HEADERS FROM "file:///testCSV3.csv" AS line
WITH line LIMIT 4
RETURN line
and the result is ok (I guess?):
I then tried various things, as e.g. this query:
LOAD CSV WITH HEADERS FROM "file:///testCSV3.csv" AS line
CREATE (:Activity {activityName: line.MyActivity, time: toInteger(line.Timestamp)})
The outcome is nodes without properties:
Any ideas what I am missing? Why are the properties activityName and time not showing up? - Thanks in advance!
(You should have shown your raw CSV file, to make the problem clearer.)
I assume your raw file starts out like this:
ID ;Timestamp;MyActivity
1;1;Run
2;2;Talk
3;3;Eat
LOAD CSV is sensitive to extra spaces, so your ID header should not be followed by a space. Also, the default field terminator is a comma, not a semicolon, so you need to specify the FIELDTERMINATOR option to override the default.
Your results would be more reasonable if you removed the extra space and changed your query to this:
LOAD CSV WITH HEADERS FROM "file:///testCSV3.csv" AS line FIELDTERMINATOR ';'
WITH line LIMIT 4
RETURN line
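Your CREATE query needs the same option; with the field terminator specified, the properties should be populated:
LOAD CSV WITH HEADERS FROM "file:///testCSV3.csv" AS line FIELDTERMINATOR ';'
CREATE (:Activity {activityName: line.MyActivity, time: toInteger(line.Timestamp)})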

Pentaho Kettle conversion from String to Integer/Number error

I am new to Pentaho Kettle and I am trying to build a simple data transformation (filter, data conversion, etc.), but I keep getting errors when reading my CSV data file (whether using CSV File Input or Text File Input).
The error is:
... couldn't convert String to number : non-numeric character found at
position 1 for value [ ]
What does this mean exactly and how do I handle it?
Thank you in advance
I have solved it. The idea is similar to what @nsousa suggested, but I didn't use the Trim option because I tried it and it didn't work in my case.
What I did is specify that if the value is a single space, it is set to null. In the Fields tab of the Text File Input step, set the Null if column to a single space.
That value looks like an empty space. Set the Format of the Integer field to # and set trim to both.
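
For illustration, here is what the error means outside of Kettle: the offending field contains a lone space, which cannot be parsed as a number, and both suggested fixes turn it into something parseable (or null). A rough sketch in Python:

# The failing value is a single space, not a number:
raw = " "
try:
    int(raw)
except ValueError as e:
    print(e)  # invalid literal for int() with base 10: ' '

# Fix 1 (trim): strip whitespace, then treat an empty result as null.
trimmed = raw.strip()
print(int(trimmed) if trimmed else None)  # None

# Fix 2 (Null if = space): map a single-space value to null directly.
print(None if raw == " " else int(raw))  # None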

MySQL dump character escaping and CSV read

I am trying to dump the contents of my MySQL query out into a CSV file and read it using a Java-based open source CSV reader. Here are the problems I face with that:
My data set has around 50 fields, a few of which contain text with line breaks. To prevent breaking my CSV reader, I specified Fields optionally enclosed by "\"" so that line breaks get wrapped inside double quotes. But with this option, fields are wrapped in double quotes even when they contain no line breaks.
It looks like the default escape character during a MySQL dump is \ (backslash). This causes line breaks to appear with \ at the end, which breaks the CSV parser. To remove this trailing \, I can specify Fields escaped by '' (empty string), but then the double quotes inside the text are not escaped either, which still breaks the CSV read.
It would be great if I could skip the escaping of line breaks but still have double quotes escaped, so the CSV reader doesn't break.
Any suggestions on what I can do here?
Thanks,
Sriram
Try dumping your data into CSV using uniVocity-parsers. You can then read the result using the same library:
Try this for dumping the data out:
import java.io.File;
import java.sql.ResultSet;
import com.univocity.parsers.csv.CsvRoutines;
import com.univocity.parsers.csv.CsvWriterSettings;

ResultSet resultSet = executeYourQuery();
// To dump the data of our ResultSet, we configure the output format:
CsvWriterSettings writerSettings = new CsvWriterSettings();
writerSettings.getFormat().setLineSeparator("\n");
writerSettings.setHeaderWritingEnabled(true); // if you want the column names to be printed out
// Then create a routines object:
CsvRoutines routines = new CsvRoutines(writerSettings);
// The write() method takes care of everything. Both resultSet and output are closed by the routine.
routines.write(resultSet, new File("/path/to/your.csv"), "UTF-8");
And this to read your file:
import java.io.File;
import java.util.Arrays;
import com.univocity.parsers.csv.CsvParser;
import com.univocity.parsers.csv.CsvParserSettings;

// creates a CSV parser
CsvParserSettings parserSettings = new CsvParserSettings();
parserSettings.getFormat().setLineSeparator("\n");
parserSettings.setHeaderExtractionEnabled(true); // extract headers from file
CsvParser parser = new CsvParser(parserSettings);
// call beginParsing to read records one by one, iterator-style. Note that there are many ways to read your file; check the documentation.
parser.beginParsing(new File("/path/to/your.csv"), "UTF-8");
String[] row;
while ((row = parser.parseNext()) != null) {
    System.out.println(Arrays.toString(row));
}
Hope this helps.
Disclaimer: I'm the author of this library. It's open source and free (Apache 2.0 license).

Meaning of Empty Line in CSV File

At first this seemed obvious, but now I'm not so sure.
If a CSV file has the following line:
a,
I would interpret that as two fields with the values "a" and "". But then looking at an empty line, I could just as easily argue that it signifies one field with the value "".
I accept that an empty line at the end of the file should be interpreted as the end of the file (no field). But does anyone have any information on what an empty line within the file should mean?
Looking at how Excel handles empty lines when reading CSV files, I can see that Excel does not ignore them.
Unfortunately, there is no way to tell if the empty line was treated as an empty field or no fields at all because Excel always has the same number of columns.
I've seen some proprietary uses of the CSV format where there was an option for how blank lines should be treated. In the end, this is the approach I took. My CSV reader class has four options for how to deal with empty lines:
Ignore and skip over them
Treat them as a row with zero fields
Treat them as a row with one empty field
Treat them as the end of the input file
If anyone's interested, I will be posting the new source code to replace the existing article at Reading and Writing CSV Files in C#.
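For illustration, here is a sketch of those four options in Python; the EmptyLineBehavior names are hypothetical and not the actual API of the C# article mentioned above:

import csv
import io
from enum import Enum

class EmptyLineBehavior(Enum):
    IGNORE = 1           # skip over empty lines
    NO_FIELDS = 2        # treat as a row with zero fields
    ONE_EMPTY_FIELD = 3  # treat as a row with one empty field
    END_OF_FILE = 4      # treat as the end of the input file

def read_csv(text, behavior):
    # csv.reader yields [] for a blank line; blank lines inside quoted
    # fields are part of a normal row and never reach this branch.
    for row in csv.reader(io.StringIO(text)):
        if row:
            yield row
        elif behavior is EmptyLineBehavior.IGNORE:
            continue
        elif behavior is EmptyLineBehavior.NO_FIELDS:
            yield []
        elif behavior is EmptyLineBehavior.ONE_EMPTY_FIELD:
            yield [""]
        else:  # END_OF_FILE
            return

print(list(read_csv("a,b\n\nc,d\n", EmptyLineBehavior.ONE_EMPTY_FIELD)))
# [['a', 'b'], [''], ['c', 'd']]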
Be aware that an empty line might be part of a multiline quoted field:
1,2,"this
is
field number
3",4,5
is valid CSV.
In most CSV files I've seen, the number of fields is constant per row (although that doesn't have to be so), so unless a CSV file only has one column, I would expect empty lines (outside of quoted fields) to be a mistake.
I just checked: Python's csv module effectively ignores empty lines (csv.DictReader skips them, and csv.reader returns them as rows with zero fields). I guess that's reasonable.
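Concretely, a quick check with Python 3's standard csv module:

import csv
import io

data = "a,b\n\nc,d\n"
# csv.reader returns the blank line as a zero-field row:
print(list(csv.reader(io.StringIO(data))))
# [['a', 'b'], [], ['c', 'd']]

# csv.DictReader (using a,b as the header) skips it entirely:
print(list(csv.DictReader(io.StringIO(data))))
# [{'a': 'c', 'b': 'd'}]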
To the best of my understanding and experience, it stands for a missing record and should be ignored. Don't treat it as EOF.
TL;DR: After thinking about the RFC, I would interpret an empty line as a record with one empty value.
The RFC (https://datatracker.ietf.org/doc/html/rfc4180) defines a grammar, which contains, among other things, this:
file = [header CRLF] record *(CRLF record) [CRLF]
...
record = field *(COMMA field)
...
field = (escaped / non-escaped)
non-escaped = *TEXTDATA
Strictly speaking, the grammar does not define the semantics, but anyway, I would interpret it so that a record has at least one field, possibly with an empty value.
If I were writing a grammar that allowed a record with no fields at all, I would write something different, maybe:
record = *fields CRLF
fields = field *(COMMA field)