Understanding the Cuckoo Sandbox JSON report

I have set up Cuckoo Sandbox and am already analyzing some malware samples.
The problem is I'm having a difficult time understanding the JSON report. Could anyone please help me understand the following: UDP, procmemory, dns_servers, http, icmp, domains, apistats, processtree?
Just a brief description of what they are, please.
Attached is a sample picture of the JSON report.
Thank you in advance.

Well, I think the output is pretty much clear if you just run one sample, but if you want to understand it better, you can check this
paper.
As far as I know, "domains", "dns", "udp", "tcp", ... show the sample's network communications over those protocols. For example, if a malware sample tries to connect to a URL, you will get a DNS query in the "dns" section, an HTTP request in the "http" section, the domain name in the "domains" section, and a "udp" conversation in the "udp" section (since DNS queries usually travel over UDP), all related to that one URL the malware tries to connect to.
"apistats" shows statistics about the API calls made by the sample, counted per process.
"procmemory" shows details about the different regions of process memory that were dumped, with their size, protection level, and start and end addresses.
I hope it helps.


How can I see which ejabberd messages are, and are not, delivered?

I need to write a server-side app which is able to see which messages have been delivered and which have not.
Messages are sent with a XEP-0184 delivery request element, and the recipients are correctly sending XEP-0184 delivery responses.
Ideally I'd like to be able to derive this from the PostgreSQL database, but I can't see anywhere that the delivery-receipt response is recorded.
If the only way I can achieve this is with a custom module, then any hints about what to hook would be gratefully received.
There is nothing to implement in the server for XEP-0184; in particular, it does not store or keep track of those stanzas.
Maybe XEP-0313 MAM could help here?
For your own module that would implement this, I guess you can look at modules that handle/log/sniff/filter stanzas, like mod_message_log, mod_filter, mod_spam_filter, mod_log_chat, and mod_pottymouth, and later look at modules that use the ETS or Mnesia storages.
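For reference, the mechanics your module would hook into are simple. A XEP-0184 exchange looks roughly like this (JIDs, ids, and body are illustrative), so tracking delivery comes down to matching the id attribute of the <received/> element against the id of the original message:

<message id='msg-1' from='alice@example.com/phone' to='bob@example.com/laptop'>
  <body>Hello</body>
  <request xmlns='urn:xmpp:receipts'/>
</message>

<message from='bob@example.com/laptop' to='alice@example.com/phone'>
  <received xmlns='urn:xmpp:receipts' id='msg-1'/>
</message>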

Why won't the OneNote APIs return all the pages in a notebook?

I am reading around here and I am seeing multiple messages about the /pages endpoint not working as expected.
It seems that the OneNote APIs (MS Graph or Office 365) are not returning all the pages that the user can see. In particular, recent pages are not shown as available.
This message is for those of you who work for Microsoft and who keep an eye on this forum. If you have any explanation or workaround for this, we would like to hear about it.
If this is work in progress, we would also like to know when the APIs can be considered stable and reliable enough to be OK for production use.
Update:
Permissions or scopes
scopes = [
    "Notes.Read",
    "Notes.Read.All",
    "Notes.ReadWrite",
]
This is for a device authorization flow; the device is acting as a Microsoft Online account. The app is registered in Azure as a personal app, but an enterprise registration behaves the same way.
The authorization process is described here
What type of app/authentication flow should I select to read my cloud OneNote content using a Python script and a personal Microsoft account?
After that I am using this endpoint to get the notebooks:
https://graph.microsoft.com/v1.0/users/user-id/onenote/notebooks
From the returned JSON I pick the notebook I want to read and access the link stored in notebook['sectionsUrl']. This call returns a sections JSON.
From this I pick the section I want and access the link stored in section['pagesUrl'].
Each call returns the expected info except the last one, where I get an arbitrarily low number of pages for the section I want to explore. There is nothing wrong with the format of the info; it is just incomplete or not up to date.
Not sure if this is related, but when I try to access the pages in a section from the MS Graph Explorer I see the same behavior (not all the pages are reported). This is a shared notebook and I am using the owner account for all of the above, so it should not be a permission problem.
from msal import PublicClientApplication
import requests

client_id = "<app-client-id>"  # the app registration's client id
authority = "https://login.microsoftonline.com/consumers"
scopes = ["Notes.Read", "Notes.Read.All", "Notes.ReadWrite"]

app = PublicClientApplication(client_id=client_id, authority=authority)
flow = app.initiate_device_flow(scopes=scopes)
# There is an interactive part here that I automated using Selenium: you
# are supposed to use a link to enter a code and then authorize the
# device; code not shown.
result = app.acquire_token_by_device_flow(flow)
token = result['access_token']
headers = {'Authorization': 'Bearer ' + token}

endpoint = "https://graph.microsoft.com/v1.0/users/c5af8759-4785-4abf-9434-xxxxxxxxx/onenote/notebooks"
notebooks = requests.get(endpoint, headers=headers).json()
for notebook in notebooks['value']:
    print(notebook['displayName'])
    print(notebook['sectionsUrl'])
    print(notebook['sectionGroupsUrl'])

# I pick a certain notebook (name is illustrative) and fetch its sections
notebook = [nb for nb in notebooks['value'] if nb['displayName'] == "MyNotebook"][0]
sections = requests.get(notebook['sectionsUrl'], headers=headers).json()['value']

# then pick the section I want and fetch its pages
section = [s for s in sections if s['displayName'] == "Test"][0]
pages = requests.get(section['pagesUrl'], headers=headers).json()
for page in pages['value']:
    print(page['title'])
Update 2:
If I use this endpoint
https://graph.microsoft.com/v1.0/users/user-id/onenote/sections/section-id/pages
I would expect to get the complete list of pages for that section.
That is not working either.
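One thing I am still ruling out (an assumption on my part, based on how Graph pages its other collections): the pages collection may simply be truncated to a default page size, with the rest reachable through @odata.nextLink. A sketch of draining the paging links, reusing section and headers from the code above:

# Follow @odata.nextLink until the collection is exhausted
# ($top asks for a bigger page; both are standard Graph paging).
url = section['pagesUrl'] + "?$top=100"
titles = []
while url:
    data = requests.get(url, headers=headers).json()
    titles += [page['title'] for page in data.get('value', [])]
    url = data.get('@odata.nextLink')  # absent on the last page
print(len(titles), titles)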
After reading the docs again and again, my understanding is that the approach is to
call https://graph.microsoft.com/v1.0/users/user-id/onenote/pages with $filter or $search, etc.
Is this correct?
Also, I vaguely remember there is a way to search for a section and have it expanded so that the search returns the children too.
Am I close to understanding this?
Thank you
MM

Error: The requested URL “[no URL]”, is invalid

Originally posted as a reply to: Error: The requested URL "[no URL]", is invalid
I get this error but only with one specific website (which is my own). This must be linked to the website, as it is happening on 3 different machines on 3 different networks (personal computer on personal wifi, phone on 4G/3G, and work PC on work network) and on no other sites. Also, it happens no matter what you put after the domain name, whether it's a real page or just '/sdjhlgajhsdfg'.
A reply to the other post said that it looks like something to do with Akamai. As this is my site, I went to cPanel and disabled the Akamai options (over 24 hours ago). I do not need any kind of caching like this, as it is a simple HTML/CSS site with only a handful of mostly text pages. The most complicated thing on the site is a downloadable PDF, which I have actually just taken down.
The error ref number changes every time you refresh the page.
Reference #9.d7c33b8.1478565760.55ccef1
Reference #9.d7c33b8.1478566986.560a7c3
Reference #9.d7c33b8.1478567000.560b460
Any advice would be very much appreciated.
I finally found some time to contact my webserver provider.
I can see that the domain has been removed from the Akamai server.
However, the CNAME which was pointing to Akamai server was causing the
issue. I have removed the CNAME record.
After about half an hour it's back up. There are some display issues with the layout, but at least it's displaying the relevant content and not the error.
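For anyone checking the same thing, a leftover CNAME is easy to verify (hostname illustrative):

# If this still prints an edgekey.net / edgesuite.net / akamaiedge.net
# name, the record is still pointing at Akamai.
dig +short www.example.com CNAME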
When you see the Invalid URL error, it indicates that the hostname (domain) is not recognized by Akamai's network (production or staging).
More info at: https://control.akamai.com/search/kb/11327
Hope this helps.
If there is a reverse proxy in front of Akamai, you may get this error.
Client > Reverse Proxy > Akamai > Your API will give this error.
Have your reverse proxy strip the Host header sent by the client and try again.
That worked for me in a setup like this:
Browser > Caddy Server > Akamai > My API
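For Caddy v2 specifically, something like this is what I mean (the upstream hostname is illustrative): by default Caddy passes the client's Host header through, so you replace it with the upstream's before the request reaches Akamai.

example.com {
    reverse_proxy https://www.example.com.edgekey.net {
        # Send the upstream host instead of the client's, so
        # Akamai recognizes the hostname.
        header_up Host {upstream_hostport}
    }
}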
In Akamai I had to add a new Property Manager entry for the new URL/cert and then activate it in production.

Azure fails when trying to create a MySQL database

When I try to create a MySQL database on Microsoft Azure using a pure REST request (PUT) to:
https://management.azure.com/subscriptions/<subscriptionid>/resourceGroups/resource-<id>/providers/successbricks.cleardb/databases/<my-database>?api-version=2014-04-01
I am getting this error:
HTTP STATUS CODE 400 Bad Request
Error message: 'Legal terms have not been accepted for this item on
this subscription. To accept legal terms, please go to the Azure
portal (http://go.microsoft.com/fwlink/?LinkId=534873) and configure
programmatic deployment for the Marketplace item or create it there
for the first time'
So I went to the Microsoft Azure Portal and accepted the legal terms. I tried again: same error. I searched almost the entire Azure Portal for some configuration related to this and found nothing.
Has anyone had the same problem?
Thanks.
You should not only accept the terms but also follow the procedure that makes programmatic access possible. It should be on the license page.
Programmatic deployment can only be found under Virtual Machines > MySQL, not under Data Storage > MySQL Database. Try the REST request again after you have enabled programmatic deployment.
In addition, I successfully created a MySQL database using the REST API without reproducing your problem, but note that a request body needs to be sent as well when using a PUT request.
OK guys, found the solution. I don't know why, but if we change the JSON attribute { "plan.name": "Pay-As-You-Go" } to { "plan.name": "Free" }, the database is created successfully.
I opened a support ticket to find out which MySQL plans are available. I will update the answer as soon as possible.
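Put together, the working request looked roughly like this for me (a sketch: everything in the body other than the plan name is an assumption about the ClearDB provider's schema, so adjust to what the portal's template shows):

import requests

url = ("https://management.azure.com/subscriptions/<subscription-id>"
       "/resourceGroups/<resource-group>/providers/successbricks.cleardb"
       "/databases/<my-database>?api-version=2014-04-01")
headers = {
    "Authorization": "Bearer <management-token>",
    "Content-Type": "application/json",
}
body = {
    "location": "East US",      # illustrative region
    "plan": {"name": "Free"},   # "Pay-As-You-Go" triggered the 400 above
    "properties": {},           # provider-specific settings, if any
}
resp = requests.put(url, headers=headers, json=body)
print(resp.status_code, resp.text)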

Receiving JSON POST requests from HPKP error respondents

I'm experimenting with setting up HPKP (https://scotthelme.co.uk/hpkp-http-public-key-pinning/) on my web server. One of its options is to specify an error-reporting URI in the header; clients send error notices to it as a JSON POST request structured like this:
{
"date-time": date-time,
"hostname": hostname,
"port": port,
"effective-expiration-date": expiration-date,
"include-subdomains": include-subdomains,
"noted-hostname": noted-hostname,
"served-certificate-chain": [
pem1, ... pemN
],
"validated-certificate-chain": [
pem1, ... pemN
],
"known-pins": [
known-pin1, ... known-pinN
]
}
My question is: how can I set something up within Linux to listen for the JSON POSTs on port 80 (or 443)?
Does anything exist for this already? Thanks everyone for your help.
Scott Helme, whose link you included, also runs this service, which takes care of it for you:
https://report-uri.io
Alternatively, if you want to try it out yourself, any web scripting language (CGI via Perl, PHP, etc.) should be able to listen for a POST request and dump it to a log file. Personally I use a Node.js service, but anything will do. I'm not aware of any scripts people have shared, but that's probably because there's no need, as it is so simple (listen for a POST request, print out the results).
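For example, a minimal stdlib-only Python listener along these lines would do (the log path is arbitrary, and binding port 80 needs root; pick a high port and proxy to it if you prefer):

from http.server import BaseHTTPRequestHandler, HTTPServer
import datetime
import json

class ReportHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read exactly the announced body, then append it to a log file.
        length = int(self.headers.get('Content-Length', 0))
        body = self.rfile.read(length)
        try:
            report = json.dumps(json.loads(body))
        except ValueError:
            report = body.decode('utf-8', 'replace')  # keep malformed reports too
        with open('hpkp-reports.log', 'a') as log:
            log.write('%s %s\n' % (datetime.datetime.utcnow().isoformat(), report))
        self.send_response(204)  # reporters don't need a response body
        self.end_headers()

HTTPServer(('', 80), ReportHandler).serve_forever()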
Also, you cannot listen on port 443 on the same domain as the site you are monitoring: the report endpoint would also be covered by HPKP, so clients won't be able to connect to it, and the only time they want to report is when they can't connect! It would work fine in report-only mode, though.
I know you're only experimenting, but I would caution you to be very careful with HPKP, as it's very easy to brick your site with it, and it adds a lot of extra considerations to certificate renewal. Personally I don't think it's that great, as the risk it introduces, to me anyway, far outweighs the risk it mitigates for most sites. More thoughts on that from me here: https://www.tunetheweb.com/security/http-security-headers/hpkp/#downsides