Powerline JSON messed up after trying to restart using "powerline-daemon --replace"

I tried installing Powerline for Bash on Linux. It was working fine with all the changes for Git etc.
By mistake I typed powerline-daemon --replace, and from then on I started getting this error:
Expecting ',' delimiter: line 13 column 6 (char 147)
Any clue what the issue could be? I checked all the JSON files for the ',' delimiters wherever I had added the Git-related code, but was unable to find the reason. I tried the commands below, but the result is the same:
powerline-daemon -k and then powerline-daemon -q
Any help/suggestion would be really appreciated.

Got it resolved.
I had missed a ',' while adding new segments.
I had also used // for comments, and it turns out comments are not allowed in JSON.
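For illustration, here is a sketch of the kind of mistake involved (the segment functions are just examples; the real theme file will differ):

{
    "segments": {
        "left": [
            // comments like this are not valid JSON
            {"function": "powerline.segments.common.net.hostname"}
            {"function": "powerline.segments.common.vcs.branch"}
        ]
    }
}

JSON requires a comma between the two segment objects and has no comment syntax, so the fixed version is:

{
    "segments": {
        "left": [
            {"function": "powerline.segments.common.net.hostname"},
            {"function": "powerline.segments.common.vcs.branch"}
        ]
    }
}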

Octave error filename undefined near line 1 column 1

I tried to run a function with the Octave GUI.
I first wrote a function in 'testFunc.rtf' (in WordPad):
function y = testFunc(x)
y = x^2 + x^3
The path to this file is 'C:\Users\username\Desktop'.
Then in the Octave GUI, I ran this code:
cd 'C:\Users\username\Desktop';
testFunc(4);
The result was just the error below:
error: 'testFunc' undefined near line 1, column 1
How can I solve this problem?
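A likely culprit in the setup above (a sketch, not tested on the asker's machine): WordPad saves RTF markup, not plain text, and Octave looks for a plain-text file named after the function. A minimal working version would be a file testFunc.m containing:

% testFunc.m -- plain-text file whose name matches the function name
function y = testFunc(x)
  y = x^2 + x^3;
endfunction

and then, in Octave:

>> cd 'C:\Users\username\Desktop'
>> testFunc(4)
ans = 80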
Just in case someone like me finds this page and the above suggestion still doesn't help, some advice. This was on a Win10 machine:
The error text itself means nothing. It's very poor; it just means "oops, something went wrong and we can't tell you why".
In my case the file existed, had the correct name, and was in the current directory. The Octave command "what" showed it was there.
But try as I might, it always gave me the error above. I tried many things I saw suggested in other search results, like making sure the function definition was on the first line of the file, and even changing DOS -> Unix line endings (CR/LF to LF). Nothing.
Then it occurred to me that I was trying to run off a "samba share" on a Linux drive. Even though it had read/write/execute privileges, it just would not actually run.
Moved the file(s) to a Windows drive and it started working!
Sheesh. Good luck, all.
Avner

JSON Errors when trying to create a transfer_config in Google BigQuery CLI

I am trying to create a transfer job on the windows commandline with
bq mk --transfer_config --data_source=amazon_s3
--target_dataset=Usage --display_name='s3_transfer_installs_global_in_v0_test'
--params='{"data_path_template":"mybucket", "destination_table_name_template":"in_table", "file_format":"CSV", "max_bad_records":"0", "skip_leading_rows":"1", "allow_jagged_rows":"false", "allow_quoted_newlines":"true", "access_key_id":"dfadfadf", "secret_access_key":"sdfsfsdfsdf"}'
but I keep getting variations of the error
Too many positional args, still have ['"allow_quoted_newlines":"true","access_key_id":',...
Output from --apilog was also not enlightening.
My JSON validates, but perhaps some characters still need escaping?
Any help is very much appreciated; I have been shuffling quotation marks and backslashes around for two hours now...
I got the same error as you when running your command.
I tried replacing the double quotes with single quotes in the --params option and it seems to work. Try the following:
bq mk --transfer_config --data_source=amazon_s3 --target_dataset=Usage --display_name='s3_transfer_installs_global_in_v0_test' --params="{'data_path_template':'mybucket', 'destination_table_name_template':'in_table', 'file_format':'CSV', 'max_bad_records':'0', 'skip_leading_rows':'1', 'allow_jagged_rows':'false', 'allow_quoted_newlines':'true', 'access_key_id':'dfadfadf', 'secret_access_key':'sdfsfsdfsdf'}"
I also tried to run the original command in Windows PowerShell and it worked without any changes.
I think the problem is in Windows cmd: unlike PowerShell, cmd.exe does not treat single quotes as quoting characters, so the single-quoted --params value is split on spaces, which matches the "Too many positional args" error.
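If you would rather keep the JSON double-quoted under cmd.exe, another option (a sketch using standard cmd escaping, not verified against bq) is to wrap the whole value in double quotes and escape the inner ones with backslashes:

bq mk --transfer_config --data_source=amazon_s3 --target_dataset=Usage --display_name=s3_transfer_installs_global_in_v0_test --params="{\"data_path_template\":\"mybucket\", \"destination_table_name_template\":\"in_table\", \"file_format\":\"CSV\", \"max_bad_records\":\"0\", \"skip_leading_rows\":\"1\", \"allow_jagged_rows\":\"false\", \"allow_quoted_newlines\":\"true\", \"access_key_id\":\"dfadfadf\", \"secret_access_key\":\"sdfsfsdfsdf\"}"

(The single quotes around the display name are dropped here, since cmd would pass them through literally.)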

Golang CSV read : extraneous " in field error

I am using a simple program to read a CSV file. Somehow I noticed that when I create a CSV using Excel or a Windows-based computer, the Go library fails to read it. Even when I use the cat command it only shows me the last line on the terminal. It always results in this error: extraneous " in field.
I researched a bit and found it is related to carriage-return differences between operating systems.
But I really want to ask how to make a generic CSV reader. I tried reading the same CSV using pandas and it parsed successfully, but I have not been able to achieve this with my Go code.
A screenshot of the correct CSV is here
Your file clearly shows that you've got an extra quote at the end of the content. While programs like pandas may be fine with that, I assume it's not valid CSV, so Go returns an error.
Quick example of what's wrong with your data: https://play.golang.org/p/KBikSc1nzD
Update: After your update and a little bit of searching, I have to apologize: the carriage return does matter and seems to be the main culprit here. Go is fine handling the \r\n Windows variant, but not the bare \r one. In that case, what you can do is wrap the bytes.Reader in a custom reader that replaces the \r byte with the \n byte.
Here's an example: https://play.golang.org/p/vNjzwAHmtg
Please note that the example is just that, an example; it doesn't handle all the possible cases where \r might be a legitimate byte.
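In case the playground link goes away, a minimal sketch of such a wrapper (with the same caveat about \r inside quoted fields):

package main

import (
	"bytes"
	"encoding/csv"
	"fmt"
	"io"
	"log"
)

// crReplacingReader rewrites every '\r' byte to '\n' so that
// classic-Mac line endings become ones encoding/csv understands.
type crReplacingReader struct {
	r io.Reader
}

func (c crReplacingReader) Read(p []byte) (int, error) {
	n, err := c.r.Read(p)
	for i := 0; i < n; i++ {
		if p[i] == '\r' {
			p[i] = '\n'
		}
	}
	return n, err
}

func main() {
	data := []byte("a,b,c\rd,e,f\r") // \r-only line endings
	r := csv.NewReader(crReplacingReader{bytes.NewReader(data)})
	records, err := r.ReadAll()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(records) // [[a b c] [d e f]]
}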

Sphinx indexer «No error» error

I have a 25 GB TSV file and am trying to import it with the command:
D:\sphinx\bin>indexer.exe -c D:\sphinx\sphinx.conf products --rotate
It runs for some time, but then shows the error
ERROR: index 'products': source 'products_tsv': read error 'No error' (line=4595827, pos=908, docid=4595827).
But the record at line 4595827 has no problems.
I have two questions:
What usually causes this problem?
Does indexer have any flags for ignoring errors?
Lost a lot of time checking the data file and found a lot of hidden symbols, such as SUB (\u001A), NUL (\u0000) and more of them, which drive Sphinx crazy.
Simply (if «simply» can be said about a 25 GB file) replaced all the SUB characters with ' and removed the others.
We moved forward and faced another issue, but this is another question.
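For anyone facing the same cleanup, a minimal sketch of it as a streaming filter (assumptions: SUB becomes an apostrophe as described above, and all other control bytes except tab/newline/CR are dropped so the TSV structure survives):

package main

import (
	"bufio"
	"io"
	"log"
	"os"
)

func main() {
	in := bufio.NewReader(os.Stdin)
	out := bufio.NewWriter(os.Stdout)
	defer out.Flush()

	for {
		b, err := in.ReadByte()
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		switch {
		case b == 0x1A:
			// SUB -> apostrophe, as in the answer above
			out.WriteByte('\'')
		case b < 0x20 && b != '\t' && b != '\n' && b != '\r':
			// drop NUL and other control bytes
		default:
			out.WriteByte(b)
		}
	}
}

Run it as, e.g., cleaner.exe < products.tsv > products_clean.tsv before indexing.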
Try adding an extra line break after the last line in your .tsv data source, so the last line is empty. In my case it helped. Thanks to #stefobark and his repository stefobark/index_tsv.

CSV::MalformedCSVError: Illegal quoting in line 1 with SmarterCSV

I have an issue when trying to process a CSV file using SmarterCSV.
The error I get is
CSV::MalformedCSVError: Illegal quoting in line 1
This is the code I use to process the CSV file:
SmarterCSV.process(file_path)
I have gone through similar questions, but nowhere did I find a good fit that could help me.
I tried to resolve it using some SmarterCSV options, such as
:remove_empty_values, :remove_empty_hashes etc., but in vain.
I welcome any suggestions or refactoring to make this work. Thanks, all.
This is due to illegal Unicode characters inside your file, such as a byte order mark (BOM).
You can process a file with such characters like this:
f = File.open(file_path, "r:bom|utf-8")
data = SmarterCSV.process(f)
f.close
Here data will contain the parsed data.
Also refer to the official documentation on this: https://github.com/tilo/smarter_csv#notes-about-file-encodings