I am working on teaching myself more about TLS and Wireshark, and seem to be stuck.
I am currently capturing the SSL session keys for the client/server communication while copying a file from my client to my own OneDrive session. I am able to decrypt the TLS stream in Wireshark, and I can see the entire transmission of my file to the host, but I cannot export the file object from that stream. It appears to be wrapped inside JSON, and I don't know a thing about JSON.
Essentially what I am doing is looking at the characteristics of Data Exfiltration and Data Loss Prevention.
Thanks for your help.
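Once the decrypted HTTP body is saved out of Wireshark (Follow → TLS Stream → Save as, or File → Export Objects → HTTP), pulling the file out of a JSON wrapper is a small scripting job. A minimal Ruby sketch, assuming a hypothetical body where the file bytes sit base64-encoded in a "content" field; inspect your own capture for the real field names:

```ruby
require 'json'
require 'base64'

# Extract a base64-encoded payload from a JSON request body saved from
# Wireshark. The field name "content" is hypothetical -- check your own
# capture to see where the file bytes actually live.
def extract_file(json_body, field: 'content')
  Base64.decode64(JSON.parse(json_body).fetch(field))
end

# Tiny inline stand-in for an exported request body:
sample = { 'name' => 'secret.txt',
           'content' => Base64.strict_encode64('hello') }.to_json
File.binwrite('recovered.bin', extract_file(sample))
```

If the upload is chunked across several requests, the same idea applies per chunk: decode each piece and append to the output file.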
I have a basic NiFi flow with one GenerateFlowFile processor and one LookupRecord processor using RestLookupService.
I need to do a rest lookup and enrich the incoming flow file with rest response.
I am getting errors that the lookup service is unable to lookup coordinates with the value I am extracting from the incoming flow file.
GenerateFlowFile is configured with a simple JSON payload.
LookupRecord is configured to extract the key from the JSON and pass it to the RestLookupService. Also, a JsonTreeReader and a JsonRecordSetWriter are configured to read the incoming flow file and to write the response back to it.
The RestLookupService itself fails with a JsonParseException about an unexpected character '<'.
RestLookupService is configured to call my API running in the cloud, using the key extracted from the incoming flow file.
The most interesting part is that when I point the URL at, for example, mocky.io, everything works correctly, which means the issue is tied to the API URL I am using (http://iotosk.ddns.net:3006/devices/${key}/getParsingData). I have also tried removing ${key}, using the exact URL, and using different URLs...
Of course the API works fine over Postman, curl, and everything else. I have checked the logs on the container the API runs in, and there are no requests in them, which means NiFi is failing even before reaching the API. At least at the application level...
I am absolutely out of options, without any clue how to solve this, and with NiFi, Google is not much help either.
Does anybody see any issue in the configuration, or can anyone point me in some direction as to what could cause this?
After some additional digging: the issue was connected with authentication logic that intercepted the request even before the API received it, which crippled the request and returned XML, as Bryan Bende suggested in the comment.
But better logging in NiFi would definitely help solve this kind of thing much faster...
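For anyone hitting the same symptom: a JsonParseException complaining about an unexpected '<' almost always means the body handed to the JSON parser is HTML or XML (an error page, a login redirect, a proxy response) rather than JSON. The failure is easy to reproduce in any JSON parser; a quick Ruby illustration:

```ruby
require 'json'

# A parse error on '<' means the response body is markup, not JSON --
# e.g. an HTML/XML error page produced before the request reaches the API.
body = '<html><body>401 Unauthorized</body></html>'
begin
  JSON.parse(body)
rescue JSON::ParserError => e
  puts "parse failed: #{e.class}"
end
```

So when a lookup service reports this, the useful debugging step is to capture the raw response (curl, a proxy, or tcpdump) and look at what actually came back instead of JSON.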
I'm having an issue with the S3 sink connector. I set my flush.size to 3 (for tests) and my S3 bucket receives the JSON file properly. But when I open the JSON, I don't have a list of JSON objects; I just have one after another. Is there any way to get the JSONs "properly" into a list when they are sent to my bucket? I want to try a "good way" to solve this; otherwise I'll fix it in a Lambda function (but I wouldn't like to do that...)
What I have:
{"before":null,"after":{"id":10230,"nome":"John","idade":30,"cidade":"São Paulo","estado":"SP","sexo":"M"}}
{"before":null,"after":{"id":10231,"nome":"Alan","idade":30,"cidade":"São Paulo","estado":"SP","sexo":"M"}}
{"before":null,"after":{"id":10232,"nome":"Rodrigo","idade":30,"cidade":"São Paulo","estado":"SP","sexo":"M"}}
What I want
[{"before":null,"after":{"id":10230,"nome":"John","idade":30,"cidade":"São Paulo","estado":"SP","sexo":"M"}},
{"before":null,"after":{"id":10231,"nome":"Alan","idade":30,"cidade":"São Paulo","estado":"SP","sexo":"M"}},
{"before":null,"after":{"id":10232,"nome":"Rodrigo","idade":30,"cidade":"São Paulo","estado":"SP","sexo":"M"}}]
The S3 sink connector writes each Kafka message to S3 as its own record, one per line.
You want to do something different: batch messages together into discrete JSON arrays.
To do this you'll need some kind of stream processing. For example, you could write a Kafka Streams processor that processes the topic and merges each batch of N messages into one message holding the array you want.
It's not clear how you expect to read these files other than manually, but most analytical tools that read from S3 buckets (Hive, Athena, Spark, Presto, etc.) all expect JSON Lines anyway.
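If you do end up fixing it downstream (the Lambda route mentioned in the question), the transformation itself is tiny: parse each line, emit one array. A Ruby sketch of that post-processing step, with the record shape borrowed from the sample above:

```ruby
require 'json'

# Turn a JSON Lines payload (one JSON document per line) into a single
# JSON array, which is the shape the question asks for.
def jsonl_to_array(jsonl)
  records = jsonl.each_line.map(&:strip).reject(&:empty?).map { |l| JSON.parse(l) }
  JSON.generate(records)
end

jsonl = <<~DATA
  {"before":null,"after":{"id":10230,"nome":"John"}}
  {"before":null,"after":{"id":10231,"nome":"Alan"}}
DATA
puts jsonl_to_array(jsonl)
```

The same few lines translate directly into whatever language the Lambda runs in.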
I'm working on a simple ruby script with cli that will allow me to browse certain statistics inside the terminal.
I'm using API from the following website: https://worldcup.sfg.io/matches
require 'httparty'
url = "https://worldcup.sfg.io/matches"
response = HTTParty.get(url)
I have two goals in mind. First, to somehow save the JSON response (I'm not using a database) so I can avoid unnecessary requests. Second, to check whether new data is available and, if it is, to overwrite the previously saved response.
What's the best way to go about this?
... with cli ...
So caching in memory is likely not available to you. In this case you can save the response to a file on disk.
Second is to check if the new data is available, and if it is, to override the previously saved response.
The thing is: how can you check whether new data is available without making a request for the data? You can't (given the information you provided). So you can simply keep fetching the data every five minutes or so and updating your local file.
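A minimal sketch of that approach: cache the body in a file and refetch only when the file is older than some freshness window. The fetcher is passed in as a lambda so the HTTP call (HTTParty.get in the question) can be swapped or stubbed; the five-minute TTL is an arbitrary choice:

```ruby
CACHE_FILE = 'matches.json'
TTL_SECONDS = 5 * 60

# Return the cached body if the file is fresher than TTL_SECONDS,
# otherwise call the fetcher and overwrite the cache.
def cached_matches(fetch)
  if File.exist?(CACHE_FILE) && Time.now - File.mtime(CACHE_FILE) < TTL_SECONDS
    File.read(CACHE_FILE)
  else
    fetch.call.tap { |body| File.write(CACHE_FILE, body) }
  end
end

# Stand-in for: -> { HTTParty.get('https://worldcup.sfg.io/matches').body }
fake_fetch = -> { '[{"home_team_country":"France"}]' }
puts cached_matches(fake_fetch)
```

Injecting the fetcher also makes the caching logic trivially testable without hitting the network.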
I am using the Logback socket appender, and everything is OK; I can receive logs from the socket.
My scenario is: we have a distributed app, and all logs are saved to a log server's log file via SocketAppender. I just use the SimpleSocketServer provided by Logback to receive logs from all the apps, and the logs are received and saved.
But the only problem is that for the socket appender no encoder can be added, so the log messages end up formatted in some default format, whereas I must save them in a specific format.
One way I can see is to write a log server like SimpleSocketServer, where the server receives the serialized object (ILoggingEvent) and I format the object myself.
But that way I'd need to write a lot of code. I think there should be a more convenient way to add an encoder.
I don't think you need to worry about the serialized form. The SocketAppender on the various clients just ships the logging events, message strings included, to the server.
Then, as long as you configure the SimpleSocketServer to use your desired encoder in its configuration, all your messages should be in the correct format on disk.
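To make that concrete: SimpleSocketServer is started with a port and an ordinary Logback configuration file, e.g. `java ch.qos.logback.classic.net.SimpleSocketServer 6000 logback-server.xml`, and the encoder declared in that file is what formats every received event on disk. A sketch of such a server-side config (the file name and pattern are illustrative):

```xml
<configuration>
  <!-- Every ILoggingEvent received over the socket is routed through this
       appender, so its encoder controls the on-disk format. -->
  <appender name="FILE" class="ch.qos.logback.core.FileAppender">
    <file>server.log</file>
    <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="DEBUG">
    <appender-ref ref="FILE"/>
  </root>
</configuration>
```

So the clients stay encoder-free, and the one server config is the single place where the format is decided.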