Illegal character in authority at index 7: hdfs://localhost:9000 with hadoop - exception

I am trying to connect to hdfs.
Configuration configuration = new Configuration();
configuration.set("fs.default.name", this.hdfsHost);
fs = FileSystem.get(configuration);
hdfsHost is 127.0.0.1:9000.
But I get this exception at FileSystem.get():
I have another project running the same code, and it works fine.
Could anyone give any suggestions?
Thank you very much.
The stack trace:
Exception in thread "main" java.lang.IllegalArgumentException
at java.net.URI.create(URI.java:842)
at org.apache.hadoop.fs.FileSystem.getDefaultUri(FileSystem.java:103)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
at TransferToHadoop.TransferFiles.<init>(TransferFiles.java:50)
at TransferToHadoop.ScheduleTransferJobs.getTransferFiles(ScheduleTransferJobs.java:99)
at TransferToHadoop.ScheduleTransferJobs.main(ScheduleTransferJobs.java:30)
Caused by: java.net.URISyntaxException: Illegal character in authority at index 7: hdfs://localhost:9000
at java.net.URI$Parser.fail(URI.java:2809)
at java.net.URI$Parser.parseAuthority(URI.java:3147)
at java.net.URI$Parser.parseHierarchical(URI.java:3058)
at java.net.URI$Parser.parse(URI.java:3014)
at java.net.URI.<init>(URI.java:578)
at java.net.URI.create(URI.java:840)
... 5 more

Try passing hdfsHost as a fully qualified URL, hdfs://127.0.0.1:9000, instead of the bare 127.0.0.1:9000.
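For example, something like this (a minimal sketch; it assumes hdfsHost holds a bare host:port string, and uses fs.defaultFS, the current name of the deprecated fs.default.name key):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

// Prepend the scheme if hdfsHost is a bare "host:port" such as "127.0.0.1:9000".
String uri = this.hdfsHost.startsWith("hdfs://") ? this.hdfsHost : "hdfs://" + this.hdfsHost;
Configuration configuration = new Configuration();
configuration.set("fs.defaultFS", uri);
// Passing the URI explicitly avoids depending on whatever core-site.xml sets.
FileSystem fs = FileSystem.get(URI.create(uri), configuration);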

This can happen if there is a trailing or leading space in core-site.xml in the property value for the HDFS name (fs.defaultFS). "Index 7" in the exception is the first character after hdfs://, which points to exactly that kind of stray whitespace in the authority part.
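For reference, the entry in core-site.xml should have no whitespace around the URL inside the value element, e.g.:

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://127.0.0.1:9000</value>
</property>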

Map the host name to the corresponding IP in the /etc/hosts file on both machines, i.e. on both the client and the server.
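For example, an entry like this (the IP and host name are placeholders) would go into /etc/hosts on both machines:

# same line on both client and server
192.168.1.10   namenode-host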

Related

JSON-over-HTTP LLD fails with name contains invalid character '{'

I'm trying to configure a simple Host LLD from JSON over HTTP source, like this one: https://pastebin.com/raw/YWWxGs7y
It uses a preprocessing step with JSONPath (which I can test with the built-in testing tool) and 3 LLD macros that are JSONPaths, too. I tested them against the output (Result) JSON from the built-in testing tool and with https://jsonpath.com/
My LLD fails with multiple errors:
Cannot create host "{#LOCATION_ID}": name contains invalid character '{'.
Cannot create host "{#LOCATION_ID}": name contains invalid character '{'.
Cannot create host "{#LOCATION_ID}": name contains invalid character '{'.
Cannot create host "{#LOCATION_ID}": name contains invalid character '{'.
...
I guess that the LLD macro's value remains empty, but I have no idea how to check or solve this.
My template in YAML: https://pastebin.com/raw/bBHuJgEz
PS: reposted from the Zabbix forum.
I believe the correct macro paths could be the ones below. Note that lld_macro_paths are evaluated against each discovered row of the LLD output, so they should be short relative JSONPaths like $.id rather than absolute paths into the whole document:
lld_macro_paths:
  -
    lld_macro: '{#LOCATION_ID}'
    path: '$.id'
  -
    lld_macro: '{#LOCATION_NAME}'
    path: '$.name'
  -
    lld_macro: '{#LOCATION_TYPE}'
    path: '$.type'

Snort configuration dead end

I've hit a dead end configuring Snort.
In theory, a simple problem.
I created a test rule to check if snort runs properly.
Location: /etc/snort/rules/local.rules
Content:
alert icmp any any -> $HOME_NET any (msg:"ICMP on fire"; sid:10000001; rev:001;)
Then I ran in the terminal:
sudo snort -T -i enp0s3 -c /etc/snort/snort.conf
Message I receive at the end of the initialization:
"Snort successfully validated the configuration!"
"Snort exiting"
But scrolling up I'm seeing:
Initializing rule chains...
0 Snort rules read
0 detection rules
0 decoder rules
0 preprocessor rules
0 Option Chains linked into 0 Chain Headers
No rules at all!
The rule location is correct in the conf file (/etc/snort/snort.conf):
var RULE_PATH /etc/snort/rules
Snort 2.9.17 Build 199
Ubuntu 20.04
Any ideas? Thanks in advance!
I would recommend supplying the rule path with the "--rule-path" flag when you execute Snort.
The --rule-path flag is not available and not recognized.
As far as I understand, that variable is just that: a variable that isn't used anywhere else in the configuration file.
The only way/workaround I found was to include the rule files explicitly, e.g. by appending this to snort.conf:
...
include c:\local.rules
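For comparison, the stock snort.conf pulls local rules in through the RULE_PATH variable; if all such include lines are commented out or missing, Snort reads 0 rules. The usual form (not necessarily your exact file) is:

# in /etc/snort/snort.conf, below "var RULE_PATH /etc/snort/rules"
include $RULE_PATH/local.rules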
Besides that, has anyone found a way to match content in the answer/response?
I mean, suppose I want to check whether the server has answered with known content, e.g. "success". I've tried the bidirectional operator <> and flow:to_client, but nothing has worked.
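As a sketch of the response-matching idea (untested; the message text, content string, and sid are made up), the usual approach is to keep a one-way rule pointed at the client and pair flow:to_client,established with a content match, rather than use the <> operator:

alert tcp any any -> $HOME_NET any (msg:"Server replied with success"; flow:to_client,established; content:"success"; nocase; sid:10000002; rev:1;)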

Why does it show an error while running the ConnectedThresholdImageFilter example?

I am trying to run the ConnectedThresholdImageFilter example in ITK, described here: https://itk.org/Doxygen45/html/Segmentation_2ConnectedThresholdImageFilter_8cxx-example.html
But it shows the following error.
itk::ImageFileWriterException (0x24cb740)
Location: "void itk::ImageFileWriter::Write() [with
TInputImage = itk::Image]" File:
/usr/local/include/ITK-4.13/itkImageFileWriter.hxx Line: 151
Description: Could not create IO object for writing file output
Tried to create one of the following:
BMPImageIO
BioRadImageIO
Bruker2dseqImageIO
GDCMImageIO
GE4ImageIO
GE5ImageIO
GiplImageIO
HDF5ImageIO
JPEGImageIO
LSMImageIO
MINCImageIO
MRCImageIO
MetaImageIO
NiftiImageIO
NrrdImageIO
PNGImageIO
StimulateImageIO
TIFFImageIO
VTKImageIO
You probably failed to set a file suffix, or set the suffix to an unsupported type
I didn't make any changes to the code, and I am trying to use a DICOM image as input.
It is one of two things:
You set the output file name with an unsupported extension. (Note that the error says it could not create an IO object for writing the file "output": that name has no extension at all, so the writer has nothing to pick a format from.)
There is something wrong with how you compiled/linked ITK, or with how you are linking your example against ITK.
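As a quick check, give the output file a recognized extension so the writer can pick an IO class. A hypothetical invocation (binary name, seed coordinates, and threshold values are illustrative; check the example's usage message for the exact argument order):

./ConnectedThresholdImageFilter input.dcm output.png 100 120 150 180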

NXLog: JSON input to GELF UDP output

We have a setup where a program logs to a .json file, in a format that follows the GELF specification.
Currently this is sent to a Graylog2 server over HTTP. This works, but due to the nature of HTTP there is significant latency, which becomes an issue with a large volume of log messages.
I want to change the delivery method from HTTP to UDP, in order to just 'fire and forget'.
The logs are written to files like this:
{ "short_message": "<message>", "host": "<host>", "full_message": "<message>", "_extraField1": "<value>", "_extraField2": "<value>", "_extraField3": "<value>" }
Current configuration is this:
<Extension json>
Module xm_json
</Extension>
<Input jsonLogs>
Module im_file
File '<File Location>'
PollInterval 5
SavePos True
ReadFromLast True
Recursive False
RenameCheck False
CloseWhenIdle True
</Input>
<Output udp>
Module om_udp
Host <IP>
Port <Port>
OutputType GELF_UDP
</Output>
With this setup, part of the JSON log message is added to the "message" field of a GELF message and sent to the server.
I've tried adding the line Exec parse_json();, but this simply results in all fields other than short_message and full_message being excluded.
I'm unsure how to configure this correctly. Even just having the complete log message added to a single field would be acceptable, since I can add an extractor on the server side.
You'd need Exec parse_json(); in order for GELF_UDP to generate proper output, but it's unclear what the exact issue is with message versus full_message/short_message.
Another option you could try is simply shipping the log via om_tcp. In that case you won't need OutputType GELF_TCP, since the file contents already follow the GELF format.
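A minimal sketch of the parsing variant, assuming the xm_gelf extension is available in your NXLog build (note the original config does not load it, yet OutputType GELF_UDP needs it; the file location, IP, and port placeholders are from the question):

<Extension json>
Module xm_json
</Extension>
<Extension gelf>
Module xm_gelf
</Extension>
<Input jsonLogs>
Module im_file
File '<File Location>'
# Parse each JSON line so the _extraField* keys become NXLog fields,
# which the GELF writer then emits as additional GELF fields.
Exec parse_json();
</Input>
<Output udp>
Module om_udp
Host <IP>
Port <Port>
OutputType GELF_UDP
</Output>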

Config.php error when logging in

After updating to 2.1.7 I get an error in the backend saying
" Warning: Invalid argument supplied for foreach() in /domains/domaindomain.nl/DEFAULT/src/Config.php on line 641
Warning: Cannot modify header information - headers already sent by (output started at /domains/ityhardy.nl/DEFAULT/src/Config.php:641) in /domains/ityhardy.nl/DEFAULT/src/Users.php on line 287
"
Using SQLite on this site.
Frontend seems to work fine.
I must add that at the time of updating I was making small changes to contenttypes.yml; I don't know in which exact order I did what.
Found it: there was a stray “uses” under the wrong field somewhere in contenttypes.yml.
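For anyone hitting the same thing: in Bolt's contenttypes.yml, uses belongs under a slug-type field, along these lines (a hypothetical minimal contenttype; names are illustrative):

pages:
    name: Pages
    fields:
        title:
            type: text
        slug:
            type: slug
            uses: title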