Bitbucket issue tracker in PhpStorm

I want to link tasks to our Bitbucket server. However, when I try to add the server (Tools > Tasks & Contexts > Configure Servers), it wants me to choose a server type.
Q1. I have no idea what type to select. I haven't found any reference for this question.
Q2. Once a type is chosen, it asks for the server URL. Do I need just https://bitbucket.org, or do I need something more specific?

Tools > Tasks & Contexts > Configure Servers
Add a server of type Generic.
Tab: General
Server URL: https://api.bitbucket.org/2.0/repositories/*YOUR LOGIN*/*REPO_NAME*
Fill in Username & Password.
Tick Use HTTP authentication.
Tab: Commit message
{summary} #{id} - when committing, the task ID will be included in the commit message.
Tab: Server Configuration
Tasks List URL: {serverUrl}/issues?status=new&status=open
Single Task URL: {serverUrl}/issues/{id}
Response Type: JSON
Then fill in the data as shown in the screenshot:
(screenshot)
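If the Generic server doesn't list any tasks, it can help to hit the Server URL outside PhpStorm first to confirm the URL and credentials. A minimal sketch using Node 18+ fetch in TypeScript (YOUR_LOGIN, REPO_NAME, username and password are placeholders; this is not part of the PhpStorm configuration itself):

// Sanity-check the Server URL and credentials outside PhpStorm.
const serverUrl = "https://api.bitbucket.org/2.0/repositories/YOUR_LOGIN/REPO_NAME";

fetch(`${serverUrl}/issues`, {
  headers: {
    // Basic auth, mirroring the "Use HTTP authentication" checkbox
    Authorization: "Basic " + Buffer.from("username:password").toString("base64"),
  },
})
  .then((res) => {
    console.log(res.status); // expect 200 if the URL and credentials are correct
    return res.json();
  })
  .then((body) => console.log(body)); // the JSON the Generic server will parse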

Adding to the accepted answer: note that version 2.0 of the Bitbucket API uses a different format for query parameters, as noted here: Bitbucket API 2.0: Filter and sort API objects.
The Task List URL should be: {serverUrl}/issues?q=%28state+%3D+%22new%22+OR+state+%3D+%22open%22%29
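For readers wondering what that q parameter contains: it is just the URL-encoded form of the BBQL filter (state = "new" OR state = "open"). A quick sketch to confirm this (plain Node/TypeScript; the + characters stand for spaces in the query string):

// Decode the q parameter from the Task List URL above
const encoded = "%28state+%3D+%22new%22+OR+state+%3D+%22open%22%29";
const decoded = decodeURIComponent(encoded.replace(/\+/g, " "));

console.log(decoded); // (state = "new" OR state = "open")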

Related

Wazuh active response with VirusTotal is not working

I wanted to integrate with VirusTotal and Yara, but it seems active response doesn't work as expected when following the steps in the link below:
https://documentation.wazuh.com/current/user-manual/capabilities/active-response/ar-use-cases/removing-malware.html
After adding/downloading eicar.com in the /root directory and reading ossec.log, I get the following output:
About VirusTotal
I just followed the documentation and it worked well for me in Wazuh Manager 4.3.4 and a Wazuh Agent of the same version.
I got those same messages in /var/ossec/logs/ossec.log of the Wazuh Agent. They appear when the files do not exist or the proper permissions are not assigned; those files were already replaced in 4.2 but still show up in the log. Since you are using the script from the documentation, don't worry about those messages.
If you check under /var/ossec/logs/active-responses.log do you get any error?
What version of Wazuh Manager and Wazuh Agent are you using?
About Yara
It shouldn't be related to VirusTotal and probably deserves a different post. There is an issue open here, but it seems to be working; that comment will probably help you troubleshoot it.
The Active Response module is managed from the Wazuh Manager in /var/ossec/etc/ossec.conf. From there you can enable the response you need to execute using an <active-response> configuration block that uses a <command> as the response. For example, if you are going to enable "remove-threat" as an Active Response on any agent that triggers the VirusTotal rule, you should have a <command> block and also an <active-response> block for that particular case; the same goes for any other AR case you may want to use.
<command>
  <name>remove-threat</name>
  <executable>remove-threat.sh</executable>
  <timeout_allowed>no</timeout_allowed>
</command>

<active-response>
  <disabled>no</disabled>
  <command>remove-threat</command>
  <location>local</location>
  <rules_id>87105</rules_id>
</active-response>
The Response (script) needs to be present on each agent under /var/ossec/active-response/bin/. If you are only using the "remove-threat" Active Response, you should have only a single <active-response> block in the Manager's configuration file. Each <active-response> block within the Manager's ossec.conf must have a matching <command> block, which is basically the response (script) the module is going to use. Perhaps you can share this configuration file with us so we can take a look.
Also, the following output from the Manager will be useful to see whether the integration with VirusTotal is being activated:
cat /var/ossec/logs/ossec.log | grep wazuh-integratord
I hope this helps,
Let us know

Why does my Soffid JSON REST Web Services Connector not update an object in the target system?

I am trying to connect my Soffid 3 server with our custom web application named Schrift. I am using a JSON REST Web Services Connector for this purpose. I added the REST Web service plugin and then configured an agent with the JSON/XML/SOAP REST web service type.
Loading of objects is working fine. My REST connector connects to the web service successfully and gets data of the accounts.
The problem is that when I try to update some data (for example, lock an account), nothing happens. Unfortunately, I don't know what should be happening: when should the REST connector send updated data to the managed system, and in what way? I didn't find any log entries saying that the REST connector was trying to update an object on the managed system. Maybe I did something wrong or missed something.
I would appreciate any help. I can post any config or log details if you need them.
Update#1
(I did some investigation after the first answer)
I checked the agent settings: Read only and Manual account creation are set to no
The account was set to unmanaged type, but I succeeded in changing its type to shared and then to single without getting an error. Now it is set to single
The task queue is empty.
Also, I've checked that the update method is present and the update properties are set correctly. updateParams is not set (which means that all attributes should be sent to the managed system).
But when I change the status of the account (from Enable to Disable), nothing happens.
In the console log I can see only these lines
14-Sep-2021 13:26:29.708 INFO [BPM-Scheduler:192.168.7.121:1] com.soffid.iam.bpm.job.JobExecutorThread.run No job to execute
When I manually run the task Analize impact for changes on Schrift, the Execution log shows
Changes detected for accounts
=============================
NO CHANGE DETECTED
Changes detected for roles
=============================
NO CHANGE DETECTED
Update#2
After many attempts I made some progress. Now, when I make some changes to the account, a task named UpdateAccount baklykov#irf.com.ua#Schrift appears, but it runs with an error.
At first it was a 415 Unsupported Media Type error, as I wrote in the comments, but now it looks a little different:
Throws exception updating object : Extensible object [type = account]
EmployeeEmail: baklykov#irf.com.ua
IsLockedOut: true (log truncated) ...
caused by Unexpected response, Content-Type: null
Update#3
I found out that Soffid's request for updating the object was in an improper format (all the parameters were passed in the HTML request instead of in the JSON body).
After researching, I found a method property called Encoding and set it to application/json.
Now the parameters are passed in the JSON body (which is what I need), but the problem is that Soffid puts all the parameters in the JSON body, including the key parameter by which the object to update should be identified. My guess is that this is the reason why the object in the target system is still not updated.
In other words, my application expects a request like this:
https://myapp.mysite.com/api/v1/Soffid/Employees?EmployeeEmail=baklykov%40irf.com.ua :
{"EmployeeLastName":"Baklykov","EmployeeFirstName":"Ivan"}
but Soffid sends this:
https://myapp.mysite.com/api/v1/Soffid/Employees:
{"EmployeeLastName":"Baklykov","EmployeeFirstName":"Ivan","EmployeeEmail":"baklykov#irf.com.ua"}
The system should have created an UpdateAccount task in the task queue. Please verify:
The task engine is in automatic mode. In read-only or manual mode, no task will be created.
If you are updating an account, check that the account is not set as unmanaged. In that case, no task is created.
Finally, verify the task queue has not held the task up.
Have you checked the engine mode? Look at Main Menu > Administration > Configure Soffid > Integration engine > Smart engine settings
It should be set to automatic.

CAS 6.2.x MFA Principal Attribute Trigger 'memberOf' Active Directory Not Working

I have CAS 6.2.x running in Kubernetes, building the image from this repo. I am passing in the cas.properties file via a ConfigMap. I have it wired up against Active Directory and am able to log in with the username/password. I am now working to enable MFA with the Google Authenticator plugin. I have this working as well if I force the flow globally with the following:
cas.authn.mfa.global-provider-id=mfa-gauth
When I try to use the values described here for Multifactor Authentication: Principal Attribute Trigger, it doesn't send me to the MFA flow. These are the settings I have set:
cas.authn.ldap[0].principalAttributeList=userPrincipalName,cn,givenName,sAMAccountName,memberOf
cas.authn.mfa.global-principal-attribute-name-triggers=memberOf
cas.authn.mfa.global-principal-attribute-value-regex=ForceMfa
When I log in, these are the values returned for memberOf:
memberOf
[CN=Group2,OU=MyOu,DC=subdomain,DC=domain,DC=local, CN=Group1,OU=MyOu,DC=subdomain,DC=domain,DC=local, CN=ForceMfa,OU=MyOu,DC=subdomain,DC=domain,DC=local]
Principal
I used Misagh's blog post as a guide.
If I change the trigger and regex to sAMAccountName and my username, it works as expected. I'm not sure if I need to change the regex format to find the group name, or if I have something else wrong. It just seems like the regex is not finding a match for some reason, since the settings work for me, just not with memberOf.
Thank you
Consider switching this to:
cas.authn.mfa.global-principal-attribute-value-regex=.*ForceMfa.+
Then attach/review your logs for org.apereo.cas under DEBUG or TRACE so you can see what's happening.
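To see why the original pattern might never match: the memberOf values are full DNs, so if CAS evaluates the regex against the whole attribute value (an assumption here, not verified against the CAS sources), a bare ForceMfa pattern fails while a pattern that allows surrounding text succeeds. A rough TypeScript illustration:

const dn = "CN=ForceMfa,OU=MyOu,DC=subdomain,DC=domain,DC=local";

// An anchored match against the bare group name fails: the value is the whole DN
console.log(/^ForceMfa$/.test(dn));      // false

// A pattern that tolerates text before and after the group name matches
console.log(/^.*ForceMfa.+$/.test(dn));  // true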

Custom service/route creation using feathersjs

I have been reading the documentation for the last 2 days. I'm new to feathersjs.
First issue: links related to feathersjs are not accessible, such as this one.
They give the following error:
This page isn’t working
legacy.docs.feathersjs.com redirected you too many times.
Hence I'm unable to trace back to similar (or any) previously asked threads.
Second issue: it's a great framework for starting real-time applications. But not all real-time applications require only DB access; they might need access to something like Amazon S3, Microsoft Azure, etc. In my case it's the same, and it's more of a problem with setting up routes.
I have executed the following commands:
feathers generate app
feathers generate service (service name: upload, REST, DB: Mongoose)
feathers generate authentication (username and password)
I have the setup ready, but how do I add another custom service?
The granularity of the service is as follows (use case: upload only):
Conventional way of doing it: router.post('/upload', (req, res, next) => {});
Assume I'm sending a file using form data, plus some extra param like { storage: "s3" } in the request.
Postman --> POST (only) to /upload --> process the request (does storage exist in the request?) --> perform the actual upload to the specific storage from the request, and log the details in the local DB as well --> send the response (success or failure)
Another thread on Stack Overflow where you answered with this:
app.use('/Category/ExclusiveContents/:categoryId', {
  create(data, params) {
    // do complex stuff here
    params.categoryId // the id of the category
    data // -> additional data from the POST request
  }
});
The solution can be viewed in this way as well: since feathersjs supports a microservice approach, it would be great to have sub-routes like:
/upload_s3 -- uploads to S3
/upload_azure -- uploads to Azure, and so on.
/upload -- the main route exposed to users. The user sends a request, the request is processed, and the respective sub-route is called. (Authentication and authorization to be included as well.)
How can I solve these types of problems using the existing feathersjs setup?
1) This is a deployment issue that Netlify is looking into. The current documentation is not on the legacy domain, though; what you are looking for can be found at docs.feathersjs.com/api/databases/querying.html.
2) A custom service can be added by running feathers generate service and choosing the custom service option. The functionality can then be implemented in src/services/<service-name>/<service-name>.class.js according to the service interface. For file uploads, an example of how to customize the parameters for feathers-blob (which is used in the file uploading guide) can be found in this issue.
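As a rough sketch of how the upload use case from the question could fit that service interface: the create method receives the POST body and can dispatch on the storage field. uploadToS3 / uploadToAzure are hypothetical helpers, not Feathers or SDK functions; adapt them to your storage clients:

// Sketch of a custom "upload" service class (TypeScript)
class UploadService {
  async create(data: { storage: string; file: unknown }, params: unknown) {
    switch (data.storage) {
      case "s3":
        // return uploadToS3(data.file);    // hypothetical S3 helper
        return { status: "uploaded", target: "s3" };
      case "azure":
        // return uploadToAzure(data.file); // hypothetical Azure helper
        return { status: "uploaded", target: "azure" };
      default:
        throw new Error(`Unknown storage backend: ${data.storage}`);
    }
  }
}

// Registered like any other Feathers service, so a POST to /upload hits create():
// app.use('/upload', new UploadService());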

Configuring Fed-lab.org as Identity Provider

MY AIM: I am creating a Service Provider on my local server using the latest opensaml-java library from Shibboleth. I want a test IdP. I chose https://fed-lab.org/. There is also no clear procedure for this configuration.
1. I have created metadata programmatically using OpenSAML.
I need to check whether my metadata is correct according to its standard schema. How can I check this?
2. I have registered my SP on the https://fed-lab.org/ site after logging in.
3. I have downloaded the Identity Provider metadata from https://fed-lab.org/online/identity-provider-metadata/
It has two IDPSSODescriptors.
The SingleSignOnServices in it are:
1. https://openidp.feide.no/simplesaml/saml2/idp/SSOService.php and
2. https://fed-lab.org/simplesaml-test/module.php/fedlab/SingleSignOnService.php
I am using the HTTP-Redirect binding.
I created the AuthnRequest message first, then did deflate, Base64 encoding, and URL encoding as per the SAML specification (see the sketch below):
https://openidp.feide.no/simplesaml/saml2/idp/SSOService.php?SAMLRequest=processedAuthnRequest
I am trying to access this URL, but I am getting no response from the site.
Where am I wrong? Please help me figure it out.
Can you provide test IdPs where there is a clear, documented way to do the configuration?
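For reference, the deflate / Base64 / URL-encode processing described above looks roughly like this (a minimal Node.js/TypeScript sketch assuming the built-in zlib module; the AuthnRequest XML and the SSO endpoint are placeholders, and a signed request would additionally need SigAlg and Signature parameters):

import { deflateRawSync } from "zlib";

// Placeholder AuthnRequest XML; in practice this comes from OpenSAML
const authnRequestXml = "<samlp:AuthnRequest ...>...</samlp:AuthnRequest>";

// HTTP-Redirect binding: raw DEFLATE (no zlib header) -> Base64 -> URL-encode
const deflated = deflateRawSync(Buffer.from(authnRequestXml, "utf8"));
const samlRequest = encodeURIComponent(deflated.toString("base64"));

// One of the SingleSignOnService endpoints from the IdP metadata
const ssoServiceUrl = "https://fed-lab.org/simplesaml-test/module.php/fedlab/SingleSignOnService.php";

const redirectUrl = ssoServiceUrl + "?SAMLRequest=" + samlRequest;
console.log(redirectUrl);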
There is a very simple IdP at http://stubidp.kentor.se that doesn't require any kind of registration. Just enter your ACS URL and a subject NameID to send an unsolicited Saml2Response.
It won't let you test everything (yet), but it can get you started on receiving a basic message and handling that.