How to set a rule in QRadar if something does not occur in the event payload for some time?

I would like to set up a rule that fires if QRadar does not find a certain string in any event payload for one week. How can I do this?
I looked through the list of conditions but did not find a suitable one. The closest I have is:
when the event(s) have not been detected by one or more of these log source types for this many seconds
However, I think this is not suitable for me because I need to work with the payload. Could someone help me solve this problem?

One approach to this problem is to use reference sets. The concept is explained here. You need two rules and a reference set:
Reference Set
Create a reference set and configure its time-to-live to the duration after which the absence should be detected.
Rule 1 (Tracker Rule)
Set up a rule that triggers on the pattern whose absence you want to detect, in your case a string in the payload. Select "Add to a Reference Set" as the rule response and use the reference set from above.
Rule 2 (Watcher Rule)
Create a second rule that triggers on the Event Name (or QID) "Reference Data Expiry". You may also need a custom event property (CEP) for the name of the reference set and/or the expired element. With this CEP you can test for the expiry of the item added by rule 1.
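As a side note, if you want to verify that the tracker rule is actually populating the set, you can inspect it through QRadar's REST API. Here is a minimal sketch in Python; the console hostname, the token, and the reference set name payload_heartbeat are all placeholders, and the response fields should be verified against your QRadar version:

import requests

QRADAR = "https://qradar.example.com"  # placeholder console address
HEADERS = {
    "SEC": "YOUR-AUTHORIZED-SERVICE-TOKEN",  # placeholder auth token
    "Accept": "application/json",
}

# Fetch the reference set; each element in "data" carries first_seen and
# last_seen timestamps, showing when the tracked string last arrived.
resp = requests.get(
    QRADAR + "/api/reference_data/sets/payload_heartbeat",
    headers=HEADERS,
    verify=False,  # consoles often run with self-signed certificates
)
resp.raise_for_status()
for element in resp.json().get("data", []):
    print(element["value"], element["last_seen"])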

Related

Zabbix: get triggers excluding some triggers by triggerid

I use the Zabbix API method trigger.get to retrieve a list of available triggers. I am trying to exclude some triggers from the result list by passing their IDs in the params:
"excludeSearch": "true",
"search": {"triggerid": "37328"}
It doesn't seem to exclude the trigger with the given id. In the manual I read:
search: Works only for string and text fields.
I am not sure whether that applies to triggerid, which the parameter table documents as:
triggerids: string/array
Anyway, is there any other way to get exclusion by triggerid working?
PS: I tried other names for that parameter (triggerid, triggerids) and experimented with passing arrays, objects, etc.
I'm not aware of a way to exclude triggers by ID in trigger.get. The string/not-string confusion is understandable: the documentation you quote that says "string/array" describes the parameter from the API perspective, while "Works only for string and text fields" refers to the database field types. The trigger ID is a numeric field in the DB, so it cannot be searched, and a search that does not apply cannot be inverted with excludeSearch either.
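Since the server cannot do the exclusion for you, the practical workaround is to fetch the triggers and drop the unwanted IDs client-side. A minimal sketch in Python, assuming the classic auth-token-in-body style of the JSON-RPC API; the URL and the token are placeholders:

import requests

ZABBIX_URL = "http://zabbix.example.com/api_jsonrpc.php"  # placeholder host
AUTH_TOKEN = "0424bd59b807674191e7d77572075f33"  # placeholder token from user.login
EXCLUDE_IDS = {"37328"}

payload = {
    "jsonrpc": "2.0",
    "method": "trigger.get",
    "params": {"output": ["triggerid", "description"]},
    "auth": AUTH_TOKEN,
    "id": 1,
}

result = requests.post(ZABBIX_URL, json=payload).json()["result"]
# The API cannot negate a search on a numeric field such as triggerid,
# so filter the unwanted triggers out of the result list ourselves.
triggers = [t for t in result if t["triggerid"] not in EXCLUDE_IDS]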

Gerrit REST API: cannot use _sortkey to resume a query

I'm using Gerrit REST API to query all changes whose status is "merged". My query is
https://android-review.googlesource.com/changes/?q=status:merged&n=2
where "n=2" limits the size of query results to 2. So I got a JSON object like:
Of course there are more results. According to the REST document:
If the n query parameter is supplied and additional changes exist that match the query beyond the end, the last change object has a _more_changes: true JSON field set. Callers can resume a query with the N query parameter, supplying the last change’s _sortkey field as the value.
So I add the query parameter N with the _sortkey of the last change 100309. The new query is:
https://android-review.googlesource.com/changes/?q=status:merged&n=2&N=002e4203000187d5
With this new query, I was hoping to get the next 2 results, since I provided the _sortkey as a cursor into my previous search results.
However, weirdly, this new query returns exactly the same results as the previous one instead of the next 2 results as I expected. It seems that providing "N=002e4203000187d5" has no effect at all.
Does anybody know why using _sortkey to resume my query doesn't work?
I chatted with one of the developers at Google, and he confirmed that _sortkey has been removed from the newer versions of Gerrit they are running at android-review and gerrit-review. The N= parameter is no longer valid. The documentation will be updated to reflect this.
The alternative is to use &S=x to skip x results, which I tested and works well.
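For example, paging through all merged changes with &S= might look like the sketch below (Python with the requests library; note that Gerrit prefixes its JSON responses with the line ")]}'" as XSSI protection, which must be stripped before parsing):

import json
import requests

BASE = "https://android-review.googlesource.com/changes/"

start = 0
while True:
    resp = requests.get(BASE, params={"q": "status:merged", "n": 2, "S": start})
    # Strip Gerrit's ")]}'" XSSI-protection prefix (the first line).
    changes = json.loads(resp.text.split("\n", 1)[1])
    for change in changes:
        print(change["_number"], change["subject"])
    # The last change carries "_more_changes": true while more results exist.
    if not changes or not changes[-1].get("_more_changes"):
        break
    start += len(changes)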
sortkey is deprecated as of Gerrit v2.9; see the Gerrit ReleaseNotes-2.9.txt, under "REST API - Changes":
Results returned by the [query changes] endpoint are now paginated using offsets instead of sortkeys.
The sortkey and sortkey_prev parameters on the endpoint are deprecated.
The results are now paginated using the --limit (-n) option to limit the number of results, and the -S option to set the start point.
Queries with sortkeys are still supported against old index versions, to enable online reindexing while clients have an older JS version.
See also this announcement:
PSA: Removing the "sortkey" field from the gerrit-on-borg query interface:
...
Our solution is to kill the sortkey field and its related search operators (sortkey_before, sortkey_after, and resume_sortkey).
There are two ways you can achieve similar functionality.
Add "&S=" to your query to skip a fixed number of results.
(Note that this redoes the search so new results may have jumped ahead and
you might process the same change twice.
This is true of the resume_sortkey implementation as well,
so your code should already be able to handle this.)
Use the before/after operators.
Instead of taking the sortkey field from the last returned change and
using it in a resume_sortkey operator, you take the updated field from
the last returned change and use it in a before operator.
(This has slightly different semantics than the sortkey field, which
uses the change number as a tiebreaker when changes have similar updated times.)
...
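Continuing the sketch above, the second option resumes by timestamp rather than offset. The updated field of a change looks like "2013-09-30 21:21:56.000000000"; the exact timestamp format the before operator accepts is an assumption to verify against your Gerrit version:

# Take the "updated" field of the last change on the previous page
# and feed it to the "before" search operator (hypothetical resume step).
last_updated = changes[-1]["updated"].split(".")[0]  # trim sub-second digits
resp = requests.get(
    BASE,
    params={"q": 'status:merged before:"%s"' % last_updated, "n": 2},
)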

IndexedDB - boolean index

Is it possible to create an index on a Boolean type field?
Let's say the schema of the records I want to store is:
{
    id: 1,
    name: "Kris",
    _dirty: true
}
I created a normal, non-unique index (in onupgradeneeded):
...
store.createIndex("dirty","_dirty",{ unique: false })
...
The index is created, but it is empty! In the browser's IndexedDB viewer the index contains no records with Boolean values, only Strings, Numbers and Dates, or even Arrays.
I am using Chrome 25 Canary.
I would like to find all records that have the _dirty attribute set to true. Do I have to change _dirty to a string or an int then?
Yes, a boolean is not a valid key.
If you must, you can of course fall back to 1 and 0.
But this is for a good reason: indexing a boolean value is not informative. In your case above, you can do a table scan and filter on the fly rather than an index query.
The answer marked as checked is not entirely correct.
You cannot create an index on a property that contains values of the Boolean JavaScript type. That part of the other answer is correct. If you have an object like var obj = {isActive: true};, trying to create an index on obj.isActive will not work and the browser will report an error message.
However, you can easily simulate the desired result. IndexedDB does not add an object to an index when the indexed property is missing from the object. Therefore, you can define the property to represent true and leave it undefined to represent false. When the property exists, the object appears in the index; when it does not, the object does not appear in the index.
Example
For example, suppose you have an object store of 'obj' objects. Suppose you want to create a boolean-like index on the isActive property of these objects.
Start by creating an index on the isActive property. In the onupgradeneeded callback function, use store.createIndex('isActive','isActive');
To represent 'true' for an object, simply use obj.isActive = 1;. Then add or put the object into the object store. When you want to query for all objects where isActive is set, you simply use db.transaction('store').index('isActive').openCursor();.
To represent false, simply use delete obj.isActive; and then add or put the object into the object store.
When you query for all objects where isActive is set, objects missing the isActive property (because it was deleted or never set) will not appear when iterating with the cursor.
Voila, a boolean index.
Performance notes
Opening a cursor on an index, as in the example here, performs well. The difference is not noticeable with small amounts of data, but it becomes very noticeable when storing a large number of objects. There is no need to adopt a third-party library to get 'boolean indices'; this is a mundane, simple feature you can implement yourself, and you should use the native functionality as much as possible.
Boolean properties describe an exclusive state: 'Active/Inactive', 'On/Off', 'Enabled/Disabled', 'Yes/No'. You can use such value pairs instead of Booleans in a JS data model for readability. This tactic also allows adding further states later ('NotSet', for the situation where something was never configured on the object, etc.).
I've used 0 and 1 instead of the boolean type.

SQLAlchemy override reflected columns dynamically

I'm using SA in a script I'll be using to periodically 'copy' a subset of MySQL tables from a 'production' replica to dev/test systems. I had written code to simply reflect the source tables and call meta.create_all(destination_engine). Due to the nature of FKs, I now know I need to apply use_alter=True to the ForeignKeys as I create the tables so that I won't get CircularDependencyErrors or other problems. I have to assume I don't know the number or the names of the FKs until I go through the metadata.
I'm new to SA and typically a Java programmer (as you will be able to tell :D). I tried to change the use_alter attribute iteratively at first:
tablesd = smeta.tables.items()
for tname, t in tablesd:
    for c in t.columns:
        for fk in c.foreign_keys:
            fk.use_alter = True
smeta.create_all(to_engine)
EDIT: It's important to note that create_all() does NOT throw a CircularDependencyError after I set the use_alter property as above; if I remove that code, create_all() does not work. It just doesn't seem to be removing the FKs from the CREATE statements...
This obviously didn't work. I then read Overriding Reflected Columns in the SA docs, sample being:
mytable = Table('mytable', meta,
    Column('id', Integer, primary_key=True),  # override reflected 'id' to have primary key
    Column('mydata', Unicode(50)),            # override reflected 'mydata' to be Unicode
    autoload=True)
I'd guess that reflecting each table individually and adding use_alter=True in the FK definition would work, but I CANNOT assume the names, values, or number of FKs/columns. I have read a lot about using DeclarativeBase to do something like this, but I'm not really sure how that would work...
How can I take my arbitrary list of tables, reflect them, and then override the use_alter option on their respective foreign keys? Am I thinking about this the wrong way?
The answer ended up being inside the problem (imagine that...). Although each ForeignKey object has a use_alter value that can be set, the Constraints also have a separate use_alter property that can be set (I was not able to find this in the API documentation). After running it through PyDev's debugger, I noticed the former were being set, but all the keys that had Constraints associated with them were still False. I set them both to True as follows:
for fk in table.foreign_keys:
    fk.use_alter = True
    fk.constraint.use_alter = True
This produced the SQL I was looking for; the tables were created correctly with no CircularDependencyErrors, and metadata.sorted_tables worked fine with no errors. I was actually able to refactor my code and do things the RIGHT way!
For anyone looking to do DB-->DB reflecting with complex FKs using SQLAlchemy, this answer and Tyler Lesmann's article are for you.
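Putting the pieces together, a minimal end-to-end sketch of that flow (the connection URLs are placeholders, and the exact API should be checked against the SQLAlchemy version you are running):

from sqlalchemy import MetaData, create_engine

src_engine = create_engine("mysql://user:pass@prod-replica/mydb")  # placeholder URL
dst_engine = create_engine("mysql://user:pass@dev-box/mydb")       # placeholder URL

meta = MetaData()
meta.reflect(bind=src_engine)  # reflect every table from the source

# Force each FK and its owning constraint to be emitted as a separate
# ALTER TABLE, so circular dependencies cannot break the CREATE order.
for table in meta.tables.values():
    for fk in table.foreign_keys:
        fk.use_alter = True
        fk.constraint.use_alter = True

meta.create_all(bind=dst_engine)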
UPDATE: Using this method has passed a peer review and is now being used as production code. Seems to work well!

Define custom POST method for MyDAC

I have three tables: objects (primary key object_ID), flags (primary key flag_ID), and object_flags (a cross-table between objects and flags with some extra info).
I have a query returning all flags, plus a one or a zero indicating whether a given object has a certain flag:
SELECT
    f.*,
    of.*,
    of.object_ID IS NOT NULL AS object_has_flag
FROM
    flags f
    LEFT JOIN object_flags of
        ON (f.flag_ID = of.flag_ID) AND (of.object_ID = :objectID);
In the application (which is written in Delphi), all rows are loaded in a component. The user can assign flags by clicking check boxes in a table, modifying the data.
Suppose one line is edited. Depending on the value of object_has_flag, the following things have to be done:
If object_has_flag was true and still is true, an UPDATE should be done on the relevant row in object_flags.
If object_has_flag was false but is now true, an INSERT should be done.
If object_has_flag was true but is now false, the row should be deleted.
It seems that this cannot be done in one query (see https://stackoverflow.com/questions/7927114/conditional-replace-or-delete-in-one-query).
I'm using MyDAC's TMyQuery as a dataset. I have written separate code that executes the necessary queries to save changes to a row, but how do I couple this to the dataset? What event handler should I use, and how do I tell the TMyQuery that it should refresh instead of post?
EDIT: apparently it is not completely clear what the problem is. The standard UpdateSQL, DeleteSQL and InsertSQL cannot be used because sometimes, after editing a line (not deleting or inserting one), an INSERT or DELETE has to be done instead of an UPDATE.
The short answer is, to paraphrase your answer here:
Look up the documentation for "Updating Data with MyDAC Dataset Components" (as of MyDAC 5.80).
Every TCustomDADataSet descendant (such as TMyQuery) can set custom update SQL statements via the SQLInsert, SQLUpdate and SQLDelete properties.
TMyUpdateSQL is also a promising component for custom update operations.
It seems that the easiest way is to use the BeforePost event, and determine what has to be done using the OldValue and NewValue properties of several fields.