Artifactory Create Repository Rest API does not work - json

I have an Artifactory Pro license, and I called the REST API as described on the following page:
https://www.jfrog.com/confluence/display/RTF/Artifactory+REST+API#ArtifactoryRESTAPI-CreateRepository
I have verified that all other APIs, such as repository listing and account creation/listing, work normally, but the repository creation API fails with a 400 error.
I tried to see the cause by raising the log level, but even at trace level there was no information about why the 400 was returned.
Below are the related logs:
2018-06-15 10:31:34,028 [http-nio-8081-exec-15] [TRACE] (o.a.a.d.r.DockerV2AuthenticationFilter:84) - DockerV2AuthenticationFilter path: /api/repositories/newrepo
2018-06-15 10:31:34,028 [http-nio-8081-exec-15] [DEBUG] (o.a.w.s.a.AuthenticationFilterUtils:105) - Entering ArtifactorySsoAuthenticationFilter.getRemoteUserName
2018-06-15 10:31:34,028 [http-nio-8081-exec-15] [DEBUG] (o.a.w.s.AccessFilter:299) - Cached key has been found for request: '/artifactory/api/repositories/newrepo' with method: 'PUT'
2018-06-15 10:31:34,028 [http-nio-8081-exec-15] [TRACE] (o.a.s.PasswordDecryptingManager:95) - Received authentication request for org.artifactory.security.props.auth.PropsAuthenticationToken#3dc5bccf: Principal: null; Credentials: [PROTECTED]; Authenticated: false; Details: org.springframework.security.web.authentication.WebAuthenticationDetails#b364: RemoteIpAddress: {IP}; SessionId: null; Not granted any authorities
2018-06-15 10:31:34,029 [http-nio-8081-exec-15] [DEBUG] (o.j.a.c.h.AccessHttpClient:109) - Executing : GET http://localhost:8040/access/api/v1/users/?cd=apiKey_shash%3DGprGDe&exactKeyMatch=false
2018-06-15 10:31:34,035 [http-nio-8081-exec-15] [DEBUG] (o.a.w.s.AccessFilter:305) - Header authentication org.artifactory.security.props.auth.PropsAuthenticationToken#c20ca8df: Principal: admin; Credentials: [PROTECTED]; Authenticated: true; Details: org.springframework.security.web.authentication.WebAuthenticationDetails#b364: RemoteIpAddress: {IP}; SessionId: null; Granted Authorities: admin, user found in cache.
2018-06-15 10:31:34,035 [http-nio-8081-exec-15] [DEBUG] (o.a.w.s.RepoFilter :100) - Entering request PUT (10.191.128.129) /api/repositories/newrepo.
2018-06-15 10:31:34,038 [http-nio-8081-exec-15] [DEBUG] (o.a.w.s.RepoFilter :188) - Exiting request PUT (10.191.128.129) /api/repositories/newrepo
Updated
My Artifactory version: 6.0.2
Response message from Artifactory:
{
  "errors" : [ {
    "status" : 400,
    "message" : "No valid type of repository found.\n"
  } ]
}
Repository creation JSON message:
{
  "key": "newrepo",
  "rclass: "local",
  "packageType": "docker",
  "dockerApiVersion": "V2",
  "includesPattern": "**/*",
  "excludesPattern": "",
  "repoLayoutRef": "simple-default",
  "description": "",
  "checksumPolicyType": "client-checksums",
  "blackedOut": false,
  "propertySets": ["artifactory"]
}
The error in this block is on purpose; code highlighting catches it quite nicely now, but when this post was originally made, highlighting was not available on SO.

In your JSON you are missing a " after rclass.
You wrote '"rclass: ' and it should be '"rclass": '
Once you fix this, the command should work properly.
Good luck :)
curl -iuadmin:password -X PUT http://localhost:8081/artifactory/api/repositories/newrepo -H "Content-type:application/vnd.org.jfrog.artifactory.repositories.LocalRepositoryConfiguration+json" -T repo_temp.json
HTTP/1.1 100 Continue
HTTP/1.1 200 OK
Server: Artifactory/5.11.0
X-Artifactory-Id: bea9f3f68aa06e62:4db81752:1643a9cff9e:-8000
Content-Type: text/plain
Transfer-Encoding: chunked
Date: Tue, 26 Jun 2018 06:57:24 GMT
Successfully created repository 'newrepo'
repo_temp.json:
{
"key": "newrepo",
"rclass": "local",
"packageType": "docker",
"dockerApiVersion": "V2",
"includesPattern": "**/*",
"excludesPattern": "",
"repoLayoutRef": "simple-default",
"description": "",
"checksumPolicyType": "client-checksums",
"blackedOut": false,
"propertySets": ["artifactory"]
}

This error is (somehow) returned by Artifactory if the Content-Type header contains a charset parameter, for example: Content-Type: application/json; charset=UTF-8
Try with simply Content-Type: application/json
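If you are calling the API from a script rather than curl, the same point applies. Here is a minimal sketch with Python's requests library, reusing the payload from the question and the placeholder host/credentials from the curl example above (not values confirmed for your setup); it serializes the body itself so no charset parameter gets appended to the header:

import json
import requests

# Repository configuration from the question, with the missing quote after rclass fixed.
repo_config = {
    "key": "newrepo",
    "rclass": "local",
    "packageType": "docker",
    "dockerApiVersion": "V2",
    "includesPattern": "**/*",
    "excludesPattern": "",
    "repoLayoutRef": "simple-default",
    "description": "",
    "checksumPolicyType": "client-checksums",
    "blackedOut": False,
    "propertySets": ["artifactory"],
}

# Send the body as a pre-serialized string and set the header explicitly,
# so the Content-Type stays exactly "application/json" with no charset.
response = requests.put(
    "http://localhost:8081/artifactory/api/repositories/newrepo",
    auth=("admin", "password"),
    headers={"Content-Type": "application/json"},
    data=json.dumps(repo_config),
)
print(response.status_code, response.text)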

Related

MacOS regex grep getting value between curly brace

I'm trying to extract the part of a curl response between {}. I have found and tested a regex that works on the file in Sublime or in an online tester.
The problem occurs when I try to run it with grep on macOS. I installed grep from Homebrew, and even though the installation completed without errors, the command doesn't work. If I delete all the line breaks from the file/response (while debugging), the command works! But in my case the curl response comes with line breaks, so I need to handle them.
Could someone tell me why this happens on macOS and how I can solve it?
Curl response:
HTTP/2 401
www-authenticate: Digest realm="MMS Public API", domain="", nonce="8878t9jXCP7+", algorithm=MD5, qop="auth", stale=false
content-type: application/JSON
content-length: 106
x-envoy-upstream-service-time: 3
date: Fri, 13 Jan 2023 17:04:03 GMT
server: envoy
HTTP/2 400
date: Fri, 13 Jan 2023 17:04:04 GMT
strict-transport-security: max-age=31536000; include subdomains;
referrer-policy: strict-origin-when-cross-origin
x-permitted-cross-domain-policies: none
x-content-type-options: nosniff
content-type: application/json
x-frame-options: DENY
content-length: 200
x-envoy-upstream-service-time: 23
server: envoy
{
"detail": "Cluster asdasdasd cannot be created in a paused state.",
"error": 400,
"errorCode": "CANNOT_CREATE_PAUSED_CLUSTER",
"parameters" : [ "asdasdasd" ],
"reason": "Bad Request"
}
I want to get only the following lines:
{
"detail": "Cluster asdasdasd cannot be created in a paused state.",
"error": 400,
"errorCode": "CANNOT_CREATE_PAUSED_CLUSTER",
"parameters" : [ "asdasdasd" ],
"reason": "Bad Request"
}
My regexes:
{([\S\s]+)}
{[^{}]*}
Sublime result: (screenshot of the regex matching, omitted)
regextester.com result: (screenshot of the regex matching, omitted)
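For what it's worth, the pattern itself copes with the line breaks when it is applied to the whole response at once rather than line by line; a quick illustration with Python's re module and the sample body above:

import re

# The sample response from above: headers, a blank line, then the JSON block.
response_text = """HTTP/2 400
content-type: application/json
content-length: 200

{
  "detail": "Cluster asdasdasd cannot be created in a paused state.",
  "error": 400,
  "errorCode": "CANNOT_CREATE_PAUSED_CLUSTER",
  "parameters" : [ "asdasdasd" ],
  "reason": "Bad Request"
}"""

# re.search scans the whole string, so the newlines inside the braces do not matter.
match = re.search(r"{[^{}]*}", response_text)
if match:
    print(match.group(0))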
Better to use jq.
Your input is plain JSON.
Example to retrieve errorCode:
curl ...... | jq '.errorCode'
jq is like sed for JSON data - you can use it to slice and filter and map and transform structured data with the same ease that sed, awk, grep and friends let you play with text.
Or gron
curl ...... | gron | awk -F' = ' '/^json.errorCode /{print $2}'
To install it:
go install github.com/tomnomnom/gron@latest
I have converted the curl command to Python code. It is easier to handle the response that way.
import requests
from requests.auth import HTTPDigestAuth

headers = {
    'Content-Type': 'application/json',
}

params = {
    'pretty': 'true',
}

json_data = {
    'autoScaling': {
        'diskGBEnabled': True,
    },
    'backupEnabled': False,
    'paused': True,
    'name': 'sdasdasd',
    'providerSettings': {
        'providerName': 'AWS',
        'instanceSizeName': 'M10',
        'regionName': 'US_EAST_2',
    },
}

response = requests.post(
    'https://cloud.mongodb.com/api/atlas/v1.0/groups/999999999999/clusters',
    params=params,
    headers=headers,
    json=json_data,
    auth=HTTPDigestAuth('user', 'password'),
)
print(response.content)
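Since the body is JSON, you can then parse it directly instead of grepping the raw text; a small follow-up sketch using the field names from the error payload above:

# Parse the JSON body and pull out just the fields you need.
data = response.json()
print(data.get("errorCode"))  # e.g. CANNOT_CREATE_PAUSED_CLUSTER
print(data.get("detail"))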

Ignore snyk code quality issue with .snyk file

Snyk finds some code quality issue that should be ignored. I'm using Snyk CLI:
"snyk code test"
✗ [High] Server-Side Request Forgery (SSRF)
Path: project/src/main/java/com/MyClass.java, line 140
Info: Unsanitized input from an HTTP parameter flows into org.apache.http.client.methods.HttpPost, where it is used as an URL to perform a request. This may result in a Server-Side Request Forgery vulnerability.
That's an example.
I know that to ignore something I need to put it in the .snyk file.
I had trouble doing that, so I've put the same thing in four times:
ignore:
  'java/Ssrf':
    - '*':
        reason: None Given
        expires: 2023-02-17T14:43:55.203Z
        created: 2023-01-18T14:43:55.205Z
  'CWE-918':
    - '*':
        reason: None Given
        expires: 2023-02-17T14:43:55.203Z
        created: 2023-01-18T14:43:55.205Z
  java/Ssrf:
    - '*':
        reason: None Given
        expires: 2023-02-17T14:43:55.203Z
        created: 2023-01-18T14:43:55.205Z
  CWE-918:
    - '*':
        reason: None Given
        expires: 2023-02-17T14:43:55.203Z
        created: 2023-01-18T14:43:55.205Z
But it still reports that issue.
I've added --policy-path=.snyk to 'snyk code test' - no help.
I've tried using 'Server-Side Request Forgery (SSRF)' as the id <- no success.
All I see in the documentation is ignoring dependency vulnerabilities. Is it possible to use that for the code check?
I got CWE-918 and 'java/Ssrf' by outputting that test to JSON:
"rules": [
  {
    "id": "java/Ssrf",
    "name": "Ssrf",
    "shortDescription": {
      "text": "Server-Side Request Forgery (SSRF)"
    },
    "defaultConfiguration": {
      "level": "error"
    },
    "precision": "very-high",
    "repoDatasetSize": 233,
    "cwe": [
      "CWE-918"
    ]
  }
Is it possible to do that at all?

Lambda Edge 502 with custom response from viewer response

I'm using a URL query string to debug my viewer-request and viewer-response Lambda@Edge functions by returning the event as JSON to the frontend (FYI, so I can check for the presence/absence of certain things via an external monitoring tool).
This works fine with the viewer-request: if I go to https://example.org/?debug_viewer_request_event I get a JSON of the viewer-request event:
import json

def lambda_handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    if "debug_viewer_request_event" in request["querystring"]:
        response = {
            "status": "200",
            "statusDescription": "OK",
            "headers": {
                "cache-control": [
                    {
                        "key": "Cache-Control",
                        "value": "no-cache"
                    }
                ],
                "content-type": [
                    {
                        "key": "Content-Type",
                        "value": "application/json"
                    }
                ]
            },
            "body": json.dumps(event)
        }
        return response
    # rest of viewer-request logic...
Testing with cURL:
curl -i https://example.org/?debug_viewer_request_event
HTTP/2 200
content-type: application/json
content-length: 854
server: CloudFront
date: Mon, 26 Apr 2021 06:05:28 GMT
cache-control: no-cache
x-cache: LambdaGeneratedResponse from cloudfront
via: 1.1 xxxxxxxxxxx.cloudfront.net (CloudFront)
x-amz-cf-pop: AMS50-C1
x-amz-cf-id: pU0ItvQA1-r5v3yR1Dl6Z3VpPW_EuuUCHhnOD60uLhng...
{"Records": [{"cf": {"config": {"distributionDomainName": "xxxxxxx.cloudfront.net", "distributionId": "xxxxxxx", "eventType": "viewer-request", "requestId": "pU0ItvQA1-r5v3yR1Dl6Z3VpPW_EuuUCHhnOD60uLhng...
However when I do the same with the viewer-response I get a 502 error:
the code is the same except debug_viewer_request_event is debug_viewer_response_event (see the sketch after this list)
if I don't include the debug query string, the response is 200 OK so I know overall both lambdas are working properly (with the exception of the debug for the viewer-response)
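For reference, the viewer-response version is essentially the following sketch; only the query-string name differs, and the event layout (both request and response available under cf) is my understanding of the standard viewer-response event shape rather than something verified here:

import json

def lambda_handler(event, context):
    cf = event["Records"][0]["cf"]
    request = cf["request"]
    if "debug_viewer_response_event" in request["querystring"]:
        # Same generated response as the viewer-request debug branch above.
        return {
            "status": "200",
            "statusDescription": "OK",
            "headers": {
                "cache-control": [{"key": "Cache-Control", "value": "no-cache"}],
                "content-type": [{"key": "Content-Type", "value": "application/json"}],
            },
            "body": json.dumps(event),
        }
    # Otherwise pass the original response through untouched.
    return cf["response"]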
Here is the cURL output:
curl -i https://example.org/?debug_viewer_response_event
HTTP/2 502
content-type: text/html
content-length: 1013
server: CloudFront
date: Mon, 26 Apr 2021 06:07:39 GMT
x-cache: LambdaValidationError from cloudfront
via: 1.1 xxxxxxxxx.cloudfront.net (CloudFront)
x-amz-cf-pop: AMS50-C1
x-amz-cf-id: NqXQ-FFEsIX-fEt8IvlHFTYoQdrZSGPScq1H-KNwVWR0-xxxxxx
The Lambda function result failed validation: The function tried to add, delete, or change a read-only header
If I look at the docs, the list of "Read-only Headers for CloudFront Viewer Response Events" is:
Content-Encoding
Content-Length
Transfer-Encoding
Warning
Via
As far as I can see I'm not directly changing any of these headers, but I'm guessing because I'm modifying the response, headers such as Content-Length are modified
Q: Is there a way to return the viewer-response event as JSON to the frontend for debugging or is it simply not possible due to not being able to change Content-Length?
As far as I can see I'm not directly changing any of these headers,
but I'm guessing because I'm modifying the response, headers such as
Content-Length are modified
I agree, I think your issue is that you are returning the response instead of calling
callback(null, response);
where callback should be the third argument to your lambda handler func:
def lambda_handler(event, context, callback):
Since content-length is not mutable, we should assume (and I checked this is true in practice, at least for viewer-request functions) that CloudFront will generate it for you when you generate a response in the edge function.

Ejabberd api set_vcard database_failure

When I use the following command against the ejabberd API, I get the following response:
curl -ik -X POST -H 'Authorization: Bearer xxxxxxxxxxx' https://localhost:5280/api/set_vcard -d '{"user":"foo","host":"example.com","name":"FN","content":"foobar"}'
HTTP/1.1 400 Bad Request
Content-Length: 18
Content-Type: application/json
Access-Control-Allow-Origin: *
Access-Control-Allow-Headers: Content-Type, Authorization, X-Admin
"database_failure"
In the ejabberd log (level 5) I see this:
[info] (<0.607.0>) Accepted connection ::ffff:172.18.0.1:46622 -> ::ffff:172.18.0.3:5280
[debug] S: [{[<<"ws">>],ejabberd_http_ws},{[<<"bosh">>],mod_bosh},{[<<"oauth">>],ejabberd_oauth},{[<<"api">>],mod_http_api},{[<<"admin">>],ejabberd_web_admin}]
[debug] ({tlssock,#Port<0.18819>,#Ref<0.650175335.3240493057.203147>}) http query: 'POST' <<"/api/set_vcard">>
[debug] client data: <<"{\"user\":\"foo\",\"host\":\"example.com\",\"contents\":[\"FN:foobar\"]}">>
[debug] [<<"api">>,<<"set_vcard">>] matches [<<"api">>]
[info] API call set_vcard [{<<"user">>,<<"foo">>},{<<"host">>,<<"example.com">>},{<<"contents">>,[<<"FN:foobar">>]}] from ::ffff:172.18.0.1:46622
[debug] Command 'set_vcard' execution allowed by rule 'api service' (CallerInfo=#{caller_module => mod_http_api,caller_server => <<"example.com">>,ip => {0,0,0,0,0,65535,44050,1},oauth_scope => [<<"ejabberd:api-service">>],usr => {<<"admin">>,<<"example.com">>,<<>>}})
[debug] Executing command mod_admin_extra:set_vcard with Args=[<<"foo">>,<<"example.com">>,<<>>,<<>>,[<<"FN:foobar">>]]
It is using MySQL as the database (which works fine for everything else); however, when I watch the database's general query log I don't see my API request trigger any queries. I do see all the other normal ejabberd queries, so there isn't a problem with the db connection, and as mentioned earlier everything else works.
$ ejabberdctl status
The node ejabberd@e87da11aa894 is started with status: started
ejabberd 18.4.0 is running in that node
Does anyone have any clues they can throw my way? I've run out of leads on what could be the issue.
!!! EDIT !!!
Workaround
As mentioned in https://github.com/processone/ejabberd/issues/2629, other people have experienced this issue. Changing the config to disable the cache and clearing the vcard table in the database seems to be a workaround:
SQL:
DELETE FROM vcard;
Config:
...
mod_vcard:
  search: false
  use_cache: false
...
The API is fairly permissive in what it allows; however, once a bad record is in the database it will fail to load.
For 'set_vcard', the 'name' is the vCard field you wish to alter and the 'content' is the contents of that field.
{
  "user": "catman",
  "host": "the.host",
  "name": "FN",
  "content": "Cat Man"
}
ejabberd also caches queries, so once you have a barfed record it'll keep returning 'database_failure' even if you've corrected your API call or fixed the record in the database by hand. Caching can be disabled in the module's configuration.
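If it helps, here is a minimal sketch of the corrected call using Python's requests; the URL and bearer token are the placeholders from the question's curl command, and verify=False just mirrors curl's -k for a self-signed certificate:

import requests

# Placeholder values taken from the question's curl command.
url = "https://localhost:5280/api/set_vcard"
headers = {"Authorization": "Bearer xxxxxxxxxxx"}

# 'name' is the vCard field and 'content' is its value
# (not a combined "FN:foobar" entry in a 'contents' list).
payload = {
    "user": "foo",
    "host": "example.com",
    "name": "FN",
    "content": "foobar",
}

response = requests.post(url, json=payload, headers=headers, verify=False)  # -k in curl
print(response.status_code, response.text)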
Notice in your log that it says:
[debug] client data: <<"{\"user\":\"foo\",\"host\":\"example.com\",\"contents\":[\"FN:foobar\"]}">>
How can it be that contents is FN:foobar? I installed 18.04, set up MySQL storage, and ran this query:
$ curl -v -H "X-Admin: true" -H "Content-Type:application/json" http://localhost:5280/api/set_vcard -d '{"user":"user1","host":"localhost","name":"FN","content":"mi nombre curllll"}'
The log says:
21:42:29.638 [info] (<0.487.0>) Accepted connection 127.0.0.1:58412 -> 127.0.0.1:5280
21:42:29.638 [debug] S: [{[<<"api">>],mod_http_api},{[<<"bosh">>],mod_bosh},{[<<"oauth">>],ejabberd_oauth},{[<<"presence">>],mod_webpresence},{[<<"register">>],mod_register_web},{[<<"rest">>],mod_rest},{[<<"ws">>],ejabberd_http_ws},{[<<"admin">>],ejabberd_web_admin}]
21:42:29.639 [debug] (#Port<0.18079>) http query: 'POST' <<"/api/set_vcard">>
21:42:29.639 [debug] client data: <<"{\"user\":\"user1\",\"host\":\"localhost\",\"name\":\"FN\",\"content\":\"mi nombre curllll\"}">>
21:42:29.639 [debug] [<<"api">>,<<"set_vcard">>] matches [<<"api">>]
21:42:29.639 [info] API call set_vcard [{<<"user">>,<<"user1">>},{<<"host">>,<<"localhost">>},{<<"name">>,<<"FN">>},{<<"content">>,<<"mi nombre curllll">>}] from 127.0.0.1:58412
21:42:29.640 [debug] Command 'set_vcard' execution allowed by rule 'test commands' (CallerInfo=#{caller_module => mod_http_api,ip => {127,0,0,1}})
21:42:29.640 [debug] Executing command mod_admin_extra:set_vcard with Args=[<<"user1">>,<<"localhost">>,<<"FN">>,<<"mi nombre curllll">>]
21:42:29.640 [debug] SQL: "select vcard from vcard where username='user1' and 0=0"
21:42:29.642 [debug] SQL: "begin;"
21:42:29.642 [debug] SQL: "UPDATE vcard SET vcard='<vCard xmlns=''vcard-temp''><FN>mi nombre curllll</FN><N><FAMILY>mi familia11</FAMILY></N><NICKNAME>mi apodoooooooooooooooooooo11</NICKNAME><PHOTO><BINVAL>R0lGODlhDwAPAJECAP//AAAAAP///wAAACH5BAEAAAIALAAAAAAPAA8AAAIulB2Zx5IA4WIhWnnqvQFJDTyhE4khaG5Wqn4tp4ErFnMY+Sll9naUfGpkFL5DAQA7</BINVAL><TYPE>image/gif</TYPE></PHOTO></vCard>' WHERE username='user1'"
21:42:29.644 [debug] SQL: "UPDATE vcard_search SET username='user1', fn='mi nombre curllll', lfn='mi nombre curllll', family='mi familia11', lfamily='mi familia11', given='', lgiven='', middle='', lmiddle='', nickname='mi apodoooooooooooooooooooo11', lnickname='mi apodoooooooooooooooooooo11', bday='', lbday='', ctry='', lctry='', locality='', llocality='', email='', lemail='', orgname='', lorgname='', orgunit='', lorgunit='' WHERE lusername='user1'"
21:42:29.658 [debug] SQL: "commit;"

Orion / Proton subscription: java.lang.NullPointerException parsing an event from Orion

My Proton instance fails with a java.lang.NullPointerException whenever an event is sent by Orion
This is the Proton log:
proton_1 | 01-Jul-2016 09:46:03.117 INFO [http-nio-8080-exec-1] com.ibm.hrl.proton.webapp.providers.EventXmlNgsiMessageReader.readFrom started event message body reader
proton_1 | 01-Jul-2016 09:46:03.125 INFO [http-nio-8080-exec-1] com.ibm.hrl.proton.webapp.providers.EventXmlNgsiMessageReader.readFrom Event: ApeContextUpdate
proton_1 | 01-Jul-2016 09:46:03.126 SEVERE [http-nio-8080-exec-1] com.ibm.hrl.proton.webapp.providers.EventXmlNgsiMessageReader.readFrom Could not parse XML NGSI event java.lang.NullPointerException, reason: null
proton_1 | last attribute name: null last value: null
proton_1 | 01-Jul-2016 09:46:03.130 INFO [http-nio-8080-exec-1] com.ibm.hrl.proton.webapp.providers.EventXmlNgsiMessageReader.readFrom finished event message body reader
proton_1 | 01-Jul-2016 09:46:03.131 INFO [http-nio-8080-exec-1] com.ibm.hrl.proton.webapp.resources.EventResource.submitNewEvent starting submitNewEvent
proton_1 | 01-Jul-2016 09:46:03.132 SEVERE [http-nio-8080-exec-1] com.ibm.hrl.proton.webapp.resources.EventResource.submitNewEvent Could not send event, reason: java.lang.NullPointerException, message: null
I've read the appendix of the user guide and double-checked the event name and the attribute list.
This is the XML sent by Orion:
POST /ProtonOnWebServer/rest/events HTTP/1.1
User-Agent: orion/0.28.0 libcurl/7.19.7
Host: localhost:8080
Accept: application/xml, application/json
Content-length: 772
Content-type: application/xml
<notifyContextRequest>
  <subscriptionId>57762eb9982959644644f9ee</subscriptionId>
  <originator>localhost</originator>
  <contextResponseList>
    <contextElementResponse>
      <contextElement>
        <entityId type="Ape" isPattern="false">
          <id>u1</id>
        </entityId>
        <contextAttributeList>
          <contextAttribute>
            <name>carsharing</name>
            <type>urn:x-ogc:def:trs:IDAS:1.0:ISO8601</type>
            <contextValue>2016-07-01T11:01:06</contextValue>
          </contextAttribute>
        </contextAttributeList>
      </contextElement>
      <statusCode>
        <code>200</code>
        <reasonPhrase>OK</reasonPhrase>
      </statusCode>
    </contextElementResponse>
  </contextResponseList>
</notifyContextRequest>
This is the definition of the Proton project (BTW, this is the project definition copied from the server filesystem, because the REST API also fails with a NullPointerException):
{
  "epn": {
    "events": [
      {
        "name": "ApeContextUpdate",
        "createdDate": "Fri Jul 01 2016",
        "attributes": [
          {
            "name": "entityId",
            "type": "String",
            "dimension": "0"
          },
          {
            "name": "entityType",
            "type": "String",
            "dimension": "0"
          },
          {
            "name": "carsharing",
            "type": "Date",
            "dimension": "0"
          }
        ]
      }
    ],
    "epas": [],
    "contexts": {
      "temporal": [],
      "segmentation": [],
      "composite": []
    },
    "consumers": [],
    "producers": [],
    "name": "t0"
  }
}
and this is my docker-compose file:
mongo:
  image: mongo:2.6
  command: --smallfiles --quiet
proton:
  image: fiware/proactivetechnologyonline
  ports:
    - "8080:8080"
orion:
  image: fiware/orion:0.28
  links:
    - mongo
    - proton
  command: -dbhost mongo --silent
  ports:
    - "1026:1026"
I'm using Orion 0.28 (the last one that supports XML notifications) and the latest Proton
UPDATE 1 - catalina.log
07-Jul-2016 07:52:39.914 INFO [http-nio-8080-exec-1] com.ibm.hrl.proton.webapp.providers.EventXmlNgsiMessageReader.readFrom started event message body reader
07-Jul-2016 07:52:39.924 INFO [http-nio-8080-exec-1] com.ibm.hrl.proton.webapp.providers.EventXmlNgsiMessageReader.readFrom Event: ApeContextUpdate
07-Jul-2016 07:52:39.924 SEVERE [http-nio-8080-exec-1] com.ibm.hrl.proton.webapp.providers.EventXmlNgsiMessageReader.readFrom Could not parse XML NGSI event java.lang.NullPointerException, reason: null
last attribute name: null last value: null
07-Jul-2016 07:52:39.928 INFO [http-nio-8080-exec-1] com.ibm.hrl.proton.webapp.providers.EventXmlNgsiMessageReader.readFrom finished event message body reader
07-Jul-2016 07:52:39.929 INFO [http-nio-8080-exec-1] com.ibm.hrl.proton.webapp.resources.EventResource.submitNewEvent starting submitNewEvent
07-Jul-2016 07:52:39.929 SEVERE [http-nio-8080-exec-1] com.ibm.hrl.proton.webapp.resources.EventResource.submitNewEvent Could not send event, reason: java.lang.NullPointerException, message: null
The problem seems to be that your Proton instance is not actually configured with your project's JSON definition file; therefore, when sending a POST of any type you will always get a NullPointerException, since no such event can be found in Proton's metadata.
Please try to configure your instance's admin interface, as described here:
http://proactive-technology-online.readthedocs.io/en/latest/Proton-InstallationAndAdminGuide/index.html (Setup Apache Tomcat for management part)
And then run the following query:
GET /<ip of the machine running Proton>:8080/ProtonOnWebServerAdmin/resources/definitions
This should return all the project definitions this instance has...
And then if you see this in the list, you can retrieve your specific project's definition by running:
GET /<ip of the machine running Proton>:8080/ProtonOnWebServerAdmin/resources/definitions/{definition_name}
I think this will either return nothing, or will be empty.
You can update the definitions by using the RESTful interface as described here: http://forge.fiware.org/plugins/mediawiki/wiki/fiware/index.php/Complex_Event_Processing_Open_RESTful_API_Specification (under the Managing Definitions Repository part)
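If you want to script those two checks, here is a minimal sketch with Python's requests; the host is a placeholder, the paths are the admin-interface ones given above, and an empty result for the second call would confirm the definition is missing:

import requests

# Placeholder: replace with the host running Proton's admin interface.
ADMIN_BASE = "http://<proton-host>:8080/ProtonOnWebServerAdmin"

# List all project definitions known to this Proton instance.
all_defs = requests.get(f"{ADMIN_BASE}/resources/definitions")
print(all_defs.status_code, all_defs.text)

# Then fetch the specific project, e.g. "t0" from the JSON definition above.
t0_def = requests.get(f"{ADMIN_BASE}/resources/definitions/t0")
print(t0_def.status_code, t0_def.text)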