QRadar API to retrieve product and vulnerability detail

It doesn't seem like I can log a question on the IBM forums without having a support contract.
For QRadar, I'm using this: https://www.ibm.com/docs/en/qradar-common?topic=endpoints-get-asset-modelassets
which returns the asset details, including the vulnerability count and installed-product IDs for each asset.
My questions are: Is it possible to retrieve these details from one of the APIs?
I need to map this asset, "1278", which reports 327 vulnerabilities. What are they, and how can I get the titles of the 327 vulnerabilities specific to asset 1278?
A product/software that is installed has ID 48846 and variant ID 97265. How can I map that to what it actually is, like "Adobe"?
From what I've read, I don't know if this is possible, or if that information is in the PostgreSQL database, which is inaccessible from the API.
Does anyone know?
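For what it's worth, here is a minimal sketch of the asset query described above, using Python and the requests library. It assumes token authentication via the SEC header and the generic filter query parameter that many QRadar REST endpoints accept; the response field names used in the loop are guesses based on the question and should be checked against the endpoint documentation.
import requests

QRADAR_HOST = "https://qradar.example.com"   # hypothetical console address
API_TOKEN = "YOUR-AUTHORIZED-SERVICE-TOKEN"  # hypothetical API token

headers = {
    "SEC": API_TOKEN,             # QRadar token header (assumes token auth is enabled)
    "Accept": "application/json",
}

# Ask only for asset 1278; 'filter' is a generic query parameter on many
# QRadar REST endpoints, but verify the exact syntax for your version.
resp = requests.get(
    QRADAR_HOST + "/api/asset_model/assets",
    headers=headers,
    params={"filter": "id=1278"},
    verify=False,  # QRadar consoles often use self-signed certificates
)
resp.raise_for_status()

for asset in resp.json():
    # Field names below are assumptions about the response layout.
    print(asset.get("id"), asset.get("vulnerability_count"))
    for product in asset.get("products", []):
        print("  installed product variant:", product.get("product_variant_id"))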

Related

How to restrict fields returned by stackexchange api, and turn off paging?

I'd like to have a list of just the current titles for all questions on one of the smaller (fewer than 10,000 questions) Stack Exchange sites. I tried the interactive utility here: https://api.stackexchange.com/docs/questions and it both reports the result as JSON at the bottom and produces the request URL at the top. For example:
https://api.stackexchange.com/2.2/questions?order=desc&sort=activity&tagged=apples&site=cooking
returns this JSON in my browser:
{"items":[{"tags":["apples","crumble"],"owner":{ ...
...
...],"has_more":true,"quota_max":300,"quota_remaining":252}
What is quota? It was 10,000 on one search on one site, but suddenly it's only 300 here.
I won't be doing this very often; what I'd like is the quickest way to edit that (or a similar) URL so I can get a list of all of the titles on a small site. I don't understand how to use paging, and I don't need any of the other fields. I don't care if I get them, but I'm thinking that if I exclude them I can get more at once.
If I need to script it, python (2.7) is my preferred (only) language.
quota_max is the number of requests your application is allowed per day. 300 is the default for an unregistered application. This used to be mentioned directly on the page describing throttles, but seems to have been removed. Here is historical information describing the default.
To increase this to 10,000, you need to register an application and then authenticate by passing an access token in your script.
To get all titles on a site, you can use a Python library to help:
StackAPI (the answer below uses this library; disclaimer: I wrote it)
Py-StackExchange
SEAPI
StackPy
Assuming you have registered your application and authenticated, we can proceed.
First, install StackAPI (documentation):
pip install stackapi
This code will then grab the 10,000 most recent questions (max_pages * page_size) for the site hardwarerecs. Each page costs you one API hit, so the more items per page, the fewer API calls.
from stackapi import StackAPI
SITE = StackAPI('hardwarerecs')
SITE.page_size = 100
SITE.max_pages = 100
# Filter to only get question title and link
filter = '!BHMIbze0EQ*ved8LyoO6rNjkuLgHPR'
questions = SITE.fetch('questions', filter=filter)
The questions variable now holds a dictionary that looks very similar to the API output, except that the library did all the paging for you. Your data is in questions['data'] and, in this case, contains a list of dictionaries that look like this:
[
...
{u'link': u'http://hardwarerecs.stackexchange.com/questions/29/sound-board-to-replace-a-gl2200-in-a-house-of-worship-foh-setting',
u'title': u'Sound board to replace a GL2200 in a house-of-worship FOH setting?'},
{ u'link': u'http://hardwarerecs.stackexchange.com/questions/31/passive-gps-tracker-logger',
u'title': u'Passive GPS tracker/logger'}
...
]
This result set is limited to only the title and the link because of the filter we applied. You can find the appropriate filter by adjusting what fields you want in the web UI and copying the filter field.
The hardwarerecs value that is passed when creating the SITE object is the first part of the site's domain URL. Alternatively, you can find it by looking at the api_site_parameter for your site on the /sites endpoint.
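To finish the original task (a plain list of titles), here is a short sketch that pulls just the titles out of that result. Depending on the StackAPI version, the paged results may sit under 'items' rather than 'data', so the sketch checks both.
from stackapi import StackAPI

SITE = StackAPI('hardwarerecs')
SITE.page_size = 100
SITE.max_pages = 100

# Same title-and-link filter as above
questions = SITE.fetch('questions', filter='!BHMIbze0EQ*ved8LyoO6rNjkuLgHPR')

# The combined, already-paged results; the key may be 'items' or 'data'
# depending on the library version.
rows = questions.get('items') or questions.get('data') or []

for row in rows:
    print(row['title'])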

Searching gaana database for a specific output

I was browsing the gaana.com music website, which has also released its developer API at api.gaana.com. The API documentation is here: http://developer.gaana.com/resources/meta-data-api/tracks/
I wish to query the database, but I am struggling with the syntax and I am unable to follow the documentation guidelines. Trial and error got me a JSON result, but I don't know how to apply conditions.
For example, I want to search the database for all tracks where the artist name is "kishor kumar" and the rating/popularity of the track is 10. I tried the URL below, but it does not filter by artist name. Can someone help me with how to use this API?
http://api.gaana.com?type=song&subtype=most_popular&token=b2e6d7fbc136547a940516e9b77e5990&format=JSON&order=alltime&language=hindi
In the Search API (Search Song) you can see:
APIURL/?type=search&subtype=search_song&key=disco deewane
Just replace disco deewane with kishore kumar.
For example, http://api.gaana.com/?type=search&subtype=search_song&key=kishore%20kumar&token=b2e6d7fbc136547a940516e9b77e5990&format=JSON&order=alltime&language=hindi
There are 6486 tracks listed.
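If you also want the popularity/rating condition from the question, one option is to call the same search endpoint from Python and filter the results client-side. This is only a rough sketch: the "tracks", "popularity" and "track_title" keys are guesses about the response layout, so inspect the JSON you actually get back and adjust them.
import requests

params = {
    "type": "search",
    "subtype": "search_song",
    "key": "kishore kumar",
    "token": "b2e6d7fbc136547a940516e9b77e5990",  # token from the question
    "format": "JSON",
    "order": "alltime",
    "language": "hindi",
}

data = requests.get("http://api.gaana.com/", params=params).json()

# Keep only tracks whose popularity/rating is 10; all key names here are
# assumptions and should be checked against the real response.
for track in data.get("tracks", []):
    if str(track.get("popularity")) == "10":
        print(track.get("track_title"))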

Activiti engine integration with custom user & group data table

My company has its own database, and it contains user and group tables. I am creating a workflow manager using the Activiti API, and I am also using Activiti-REST. I need to fetch user data and group data from my company's database instead of using ACT_ID_USER and ACT_ID_GROUP. I searched the internet and posted in their forum, but I didn't get any sensible answers.
In the forum they suggest using LDAP, but I haven't touched LDAP.
I went through the Activiti source. Can I just modify its iBATIS mapping files related to ACT_ID_USER? Will that work, or is there a better approach? The Activiti-REST API must also work against our own tables.
Can someone please point me to some good references on this?
You have to implement the interface org.activiti.engine.impl.interceptor.SessionFactory and return the appropriate org.activiti.engine.impl.interceptor.Session type (org.activiti.engine.impl.persistence.entity.UserIdentityManager.class or org.activiti.engine.impl.persistence.entity.GroupIdentityManager.class). Then you have to create your own user/group manager, usually by extending org.activiti.engine.impl.persistence.entity.UserEntityManager or org.activiti.engine.impl.persistence.entity.GroupEntityManager.
Finally, you have to register your custom session factories on your processEngineConfiguration. More info (a little outdated, because the session types changed in 5.13) is available in this blog post.

The prefix "atom" for element "atom:cc" is not bound exception

I am trying to fetch the contacts of a user who has an account in the Google Apps Marketplace. While fetching the contacts, I get the following error:
com.google.gdata.util.ParseException: The prefix "atom" for element "atom:cc" is not bound.
at com.google.gdata.util.XmlParser.parse(XmlParser.java:695)
at com.google.gdata.util.XmlParser.parse(XmlParser.java:568)
at com.google.gdata.data.BaseFeed.parseAtom(BaseFeed.java:793)
at com.google.gdata.wireformats.input.AtomDataParser.parse(AtomDataParser.java:68)
at com.google.gdata.wireformats.input.AtomDataParser.parse(AtomDataParser.java:39)
at com.google.gdata.wireformats.input.CharacterParser.parse(CharacterParser.java:)
at com.google.gdata.wireformats.input.XmlInputParser.parse(XmlInputParser.java:52)
...
I am using the Java client library to fetch the contacts. Can you please let me know if there is an issue in the Java client library? This issue has been around for a long time and I badly need to find a solution. What should I do to make it work? Any help would be appreciated.
Thanks,
VijayRaj
I had the same problem with the .NET client that you have with the Java client.
After contacting Google support, they told me that the contact's arbitrary XML data, which is in a Property element, cannot be parsed with my version of GData.
However, there is a time-intensive workaround of deleting and recreating the contacts, but that's probably not what you are looking for; it wasn't for me either.
After switching to the Python implementation, everything works fine now.
Check out this issue report: Issue 361

Tweet counter for identi.ca

Is there a way to retrieve the number of times a certain URL was "dented" (shared on identi.ca, status.net and/or the like)?
For Twitter there are several services that give this information.
Twitter itself: http://urls.api.twitter.com/1/urls/count.json?url=http://example.com&callback=twttr.receiveCount
Tweetmeme: http://api.tweetmeme.com/url_info.jsonc?url=http://example.com
Topsy: http://otter.topsy.com/stats.js?url=http://example.com&callback=?
I don't need the fancy extra information that Tweetmeme or Topsy deliver, only the count.
I am aware that this is problematic given the "distributed" nature of status.net: it will only give a count from one single silo, e.g. identi.ca. However, for me, for now, that would be enough.
Is there such an endpoint that gives me such JSON?
I don't think so. There's a file table in StatusNet databases that holds references to dented URLs (so it wouldn't be hard to count them if you had access to the database or could write a plugin -- i.e., you wouldn't have to parse all notices, just look up the file table), but it's not exposed through the API.
The list of API possible calls for StatusNet is here: http://status.net/wiki/TwitterCompatibleAPI
In addition, there's a proposed Google Summer of Code project on this subject: Social Analytics plugin
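If you do have direct access to the StatusNet database, the lookup hinted at above could look roughly like this. The table and column names (file, file_to_post, url) are guesses at the StatusNet schema, and the MySQL credentials are placeholders, so verify both against your installation before relying on the count.
import pymysql  # assumes the StatusNet instance runs on MySQL

# Placeholder credentials for a local StatusNet database
conn = pymysql.connect(host="localhost", user="statusnet",
                       password="secret", database="statusnet")

url = "http://example.com"

with conn.cursor() as cur:
    # Guessed schema: 'file' holds one row per referenced URL and
    # 'file_to_post' links each file to the notices (dents) that mention it.
    cur.execute(
        """
        SELECT COUNT(*)
        FROM file f
        JOIN file_to_post ftp ON ftp.file_id = f.id
        WHERE f.url = %s
        """,
        (url,),
    )
    (dent_count,) = cur.fetchone()

print(url, "was dented", dent_count, "times")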