I'm relatively new to Postman and am having a problem with the following simple scenario: I have a collection of Postman requests that all point to a local IP where I am developing my application. Suppose I finish my local development, deploy the application on some other server, and want to repeat the requests I previously created against THAT server. I know that one way to do this would probably be to use variables.
Instead of that, though, I exported the collection and manually edited the exported JSON file, replacing all the old local IPs with the new server IP. I also changed the collection name and ID to something arbitrary. While the import back into Postman works and I see the requests, they all still have the old IP hanging there, as if my replace didn't work, or as if Postman somehow caches the requests and thinks the new collection is the same as the old one. I also tried duplicating the collection and exporting the duplicate / replacing / importing again, but the behavior seems to be the same.
Did I miss something, or should I approach what I want to do differently?
Thank you.
Duh, I was dumb enough to have been substituting only the "raw" URL, while right below it were the old values for "host" and "port", which are the ones Postman actually constructs the URL from:
{
    "info": {
        "_postman_id": "1499274a-07bc-4ed2-87d4-b10d0cef8f8f",
        "name": "some-collection-DEVSERVER",
        "schema": "https://schema.getpostman.com/json/collection/v2.1.0/collection.json"
    },
    "item": [
        {
            "name": "login (success - bad locale)",
            "request": {
                "method": "POST",
                "header": [
                    {
                        "key": "Content-Type",
                        "name": "Content-Type",
                        "value": "application/json",
                        "type": "text"
                    }
                ],
                "body": {
                    "mode": "raw",
                    "raw": "{\n\t\"username\" : \"TEST\",\n\t\"password\" : \"123456\",\n\t\"locale\" : \"asd\"\n}"
                },
                "url": {
                    "raw": "http://SERVER-IP:SERVER-PORT/new-path/login",
                    "protocol": "http",
                    "host": [
                        "127",
                        "0",
                        "0",
                        "1"
                    ],
                    "port": "8081",
                    "path": [
                        "old-path",
                        "login"
                    ]
                }
            },
            "response": []
        },
        ...
    ]
}
So, after the suggestion to use variables, I ended up creating two collection variables, "base-URL-LOCAL" and "base-URL-SERVER", that play the role of constants, and a third variable "base-url" which e.g. could have the value {{base-URL-LOCAL}} (both the initial and current values have to be updated). In my exported JSON collection, I substituted all "url" elements with something like the following:
"url": {
"raw": "{{base-url}}/login",
"host": [
"{{base-url}}"
],
"path": [
"login"
]
}
That way, somebody who gets my collection won't need pre-defined environments; they only have to edit the collection variables, setting e.g. base-url to {{base-URL-SERVER}}.
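For reference, a minimal sketch of how those collection variables can look in the exported v2.1 collection JSON (the values below are illustrative placeholders):

"variable": [
    { "key": "base-URL-LOCAL", "value": "http://127.0.0.1:8081", "type": "string" },
    { "key": "base-URL-SERVER", "value": "http://SERVER-IP:SERVER-PORT", "type": "string" },
    { "key": "base-url", "value": "{{base-URL-LOCAL}}", "type": "string" }
]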
I have an existing configMap with JSON data. The data could be anything that is allowed in JSON format - arrays, objects, strings, integers, etc.
For example:
{
    "channels": ["10", "20", "30"],
    "settings": "off",
    "expiry": 100,
    "metadata": {
        "name": "test",
        "action": "update"
    }
}
Now I want to update the configMap with newer data.
The catch is that I don't want to update any of the values, but just to add or remove any fields that have been added or removed in the new data.
The reason for this is that the values are defaults and might have been already updated in the configMap by other pods/services.
So for example, if the new data contains the below JSON data (expiry field removed and some values changed):
{
    "channels": ["10", "20", "30", "100", "10000"],
    "settings": "on",
    "metadata": {
        "name": "test",
        "action": "delete"
    }
}
Then I expect the configMap to be updated to look like this:
{
    "channels": ["10", "20", "30"],
    "settings": "off",
    "metadata": {
        "name": "test",
        "action": "update"
    }
}
So the values stayed as they were, but the 'expiry' field was removed.
I am using Ansible to deploy the Kubernetes resources, but I am open to other tools/scripts that could help me achieve what I need.
Thanks in advance.
This is not supported by Kubernetes. As you said, the data is JSON-encoded, so it's a string. ConfigMaps (and Secrets) only understand strings, not nested data of any kind; that's why you have to encode it before storage. You'll need to fetch the data, decode it, make your changes, and then encode it and update/patch it via the API.
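As an illustration, here is a minimal Node.js sketch of that fetch/decode/merge/encode cycle, driven by kubectl; the ConfigMap name my-config, the data key config.json and the file new-defaults.json are assumptions for the example, not anything Kubernetes prescribes:

// merge-defaults.js - a sketch, not production code
const { execSync } = require('child_process');

// 1. Fetch the ConfigMap and decode the JSON-encoded payload.
const cm = JSON.parse(execSync('kubectl get configmap my-config -o json').toString());
const current = JSON.parse(cm.data['config.json']);   // existing (possibly user-modified) values
const incoming = require('./new-defaults.json');      // the new set of default fields

// 2. Merge: keep the existing value for every key that still exists in the new data,
//    take the default for keys that are new, and drop keys the new data no longer has.
const merged = {};
for (const key of Object.keys(incoming)) {
  merged[key] = key in current ? current[key] : incoming[key];
}

// 3. Re-encode the result and patch it back into the ConfigMap.
//    (Single quotes inside the payload would need extra shell escaping.)
const patch = JSON.stringify({ data: { 'config.json': JSON.stringify(merged) } });
execSync(`kubectl patch configmap my-config --type merge -p '${patch}'`);

The same three steps translate directly to an Ansible playbook (e.g. a k8s_info lookup to read, a filter or set_fact to merge, and the k8s module to apply the result) if you prefer to keep everything inside the existing deployment.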
I have an ARM template that deploys APIs to an API Management instance.
Here is an example of one API:
{
    "properties": {
        "authenticationSettings": {
            "subscriptionKeyRequired": false
        },
        "subscriptionKeyParameterNames": {
            "header": "Ocp-Apim-Subscription-Key",
            "query": "subscription-key"
        },
        "apiRevision": "1",
        "isCurrent": true,
        "subscriptionRequired": true,
        "displayName": "DDD.CRM.PostLeadRequest",
        "serviceUrl": "https://test1/api/FuncCreateLead?code=XXXXXXXXXX",
        "path": "CRMAPI/PostLeadRequest",
        "protocols": [
            "https"
        ]
    },
    "name": "[concat(variables('ApimServiceName'), '/mms-crm-postleadrequest')]",
    "type": "Microsoft.ApiManagement/service/apis",
    "apiVersion": "2019-01-01",
    "dependsOn": []
}
When I am deploying this to different environments, I would like to be able to substitute the service URL depending on the environment. I'm wondering what the best approach is.
Can I read in a config file or something like that?
At the time of deployment I have a variable that tells me the environment, so I can base decisions on that; I'm just not sure of the best way to do it.
See ARM template parameters: https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-group-authoring-templates#parameters They can be specified in a separate file, so you will have a single template but environment-specific parameter files.
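As a sketch of what that could look like (the parameter name serviceUrl and the file names are illustrative): declare a parameter in the template and reference it from the API resource,

"parameters": {
    "serviceUrl": {
        "type": "string",
        "metadata": { "description": "Backend service URL for this environment" }
    }
},
...
"serviceUrl": "[parameters('serviceUrl')]",

then provide one parameter file per environment, e.g. apim.parameters.test.json:

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "serviceUrl": { "value": "https://test1/api/FuncCreateLead?code=XXXXXXXXXX" }
    }
}

and pick the file at deployment time based on the environment variable you already have (e.g. --parameters with az deployment group create, or -TemplateParameterFile with New-AzResourceGroupDeployment).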
We are using LoopBack successfully so far, but we want to add query params to our API documentation.
In our swagger.json file, we might have something that looks like this:
{
    "swagger": "2.0",
    "info": {
        "version": "1.0.0",
        "title": "poc-discovery"
    },
    "basePath": "/api",
    "paths": {
        "/Users/{id}/accessTokens/{fk}": {
            "get": {
                "tags": [
                    "User"
                ],
                "summary": "Find a related item by id for accessTokens.",
                "operationId": "User.prototype.__findById__accessTokens",
                "parameters": [
                    {
                        "name": "fk",
                        "in": "path",
                        "description": "Foreign key for accessTokens",
                        "required": true,
                        "type": "string",
                        "format": "JSON"
                    },
                    {
                        "name": "id",
                        "in": "path",
                        "description": "User id",
                        "required": true,
                        "type": "string",
                        "format": "JSON"
                    },
                    {
                        "name": "searchText",
                        "in": "query",
                        "description": "The Product that needs to be fetched",
                        "required": true,
                        "type": "string"
                    },
                    {
                        "name": "ctrCode",
                        "in": "query",
                        "description": "The Product locale needs to be fetched. Example=en-GB, fr-FR, etc.",
                        "required": true,
                        "type": "string"
                    }
                ],
I am 99% certain the swagger.json information gets generated dynamically via information from the .json files in the /server/models directory.
I am hoping that I can add the query params that we accept for each model in those .json files. What I want to avoid is having to modify swagger.json directly.
What is the best approach to add our query params so that they show up in our docs? I'm very confused as to how best to approach this.
After a few hours of tinkering, I'm afraid there is no straightforward way to achieve this, as the swagger spec generated here is a representation of the remoting metadata for the model methods, along with model data from the model.json files.
Thus, updating the remoting metadata for built-in model methods would be challenging, and it might not be fully supported by the method implementations.
The right approach here, IMO, is to:
- create a remoteMethod wrapper around the built-in method for which you want additional params injected, with the required HTTP mapping data (see the sketch below), and
- disable the REST endpoint for the built-in method using
MyModel.disableRemoteMethod(<methodName>, <isStatic>).
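For illustration, here is a minimal sketch of such a wrapper, assuming a hypothetical Product model with illustrative field names; the filter logic is only an example of how the extra query params might be used:

// common/models/product.js - sketch of a remoteMethod wrapper exposing extra query params
module.exports = function(Product) {
  // Thin wrapper around the built-in find() that documents searchText and ctrCode
  Product.findWithSearch = function(searchText, ctrCode, cb) {
    // Use the extra params to build a filter, then delegate to the built-in method
    Product.find({ where: { name: { like: searchText }, locale: ctrCode } }, cb);
  };

  Product.remoteMethod('findWithSearch', {
    description: 'Find products, with searchText and ctrCode exposed in the generated docs',
    accepts: [
      { arg: 'searchText', type: 'string', required: true, http: { source: 'query' },
        description: 'The Product that needs to be fetched' },
      { arg: 'ctrCode', type: 'string', required: true, http: { source: 'query' },
        description: 'The Product locale that needs to be fetched, e.g. en-GB, fr-FR' }
    ],
    returns: { arg: 'products', type: 'array', root: true },
    http: { path: '/findWithSearch', verb: 'get' }
  });

  // Hide the built-in endpoint that the wrapper replaces
  Product.disableRemoteMethod('find', true);
};

The accepts entries of the wrapper are what end up as query parameters in the generated swagger.json, which is the part the built-in methods don't let you extend.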
I have been playing around with Azure Logic Apps and trying to retrieve a Pocket (ReadItLater) article so that I can create a new task in my preferred task manager. I have two HTTP connectors: one for the retrieve operation using the Pocket API, and another to post data to Todoist (my preferred task manager).
I can retrieve the article, and the response looks like this (I removed a few properties below for easier reading):
{
    "statusCode": 200,
    "headers": {
        "pragma": "no-cache",
        "status": "200 OK"
    },
    "body": {
        "status": 1,
        "complete": 1,
        "list": {
            "586327616": {
                "item_id": "586327616",
                "resolved_id": "586327616",
                "given_url": "http://kenwheeler.github.io/slick/?utm_source=hackernewsletter&utm_medium=email&utm_term=design&mc_cid=58c9499fa2&mc_eid=3aaf6c4e47",
                "given_title": "slick - the last carousel you'll ever need",
                "time_added": "1396652224",
                "time_updated": "1405156517",
                "resolved_title": "slick",
                "resolved_url": "http://kenwheeler.github.io/slick/?utm_source=hackernewsletter&utm_medium=email&utm_term=design&mc_cid=58c9499fa2&mc_eid=3aaf6c4e47",
                "excerpt": "Add slick.js before your closing <body> tag, after jQuery (requires jQuery 1.7 +) <script type=\"text/javascript\" src=\"slick/slick.min.",
                "word_count": "22"
            }
        }
    }
}
Now I want to parse the above response to retrieve individual article properties (e.g. resolved_title). The issue is that the key under "list" ("586327616" here) is dynamic and changes for every article, and I can't seem to handle this in a Logic App expression. My current action in the Logic App looks like:
"postToTodoist": {
"conditions": [
{
"expression": "#equals(outputs('getPocketArticles')['statusCode'], 200)"
},
{
"dependsOn": "getPocketArticles"
}
],
"inputs": {
"body": "#{outputs('getPocketArticles')['body']['list'][0]['resolved_title']}",
"headers": {
"Content-Type": "application/x-www-form-urlencoded"
},
"method": "POST",
"repeat": {},
"uri": "https://todoist.com/API/v6/add_item"
},
"type": "Http"
}
For the expression I have tried converting the response to a string, using coalesce, and accessing the item by index, but nothing seems to work. The error tells me what the available property is, i.e.:
{"code":"InvalidTemplate","message":"Unable to process template language expressions in action 'postToTodoist' inputs at line '1' and column '11': 'The template language expression 'coalesce(body('getPocketArticles')['list']).resolved_title' cannot be evaluated because property 'resolved_title' doesn't exist, available properties are '586327616'. Please see https://aka.ms/logicexpressions for usage details.'."}
I feel that it is not possible to construct an expression without knowing the name of the property. Has anyone done something similar?
In the app we're developing, we create all the JSON on the server side using dynamically generated configs (JSON objects). We use that for stores (and other stuff, like GUIs), with a dynamically generated list of their data fields.
With a JSON like this:
{
    "proxy": {
        "type": "rest",
        "url": "/feature/163",
        "timeout": 600000
    },
    "baseParams": {
        "node": "163"
    },
    "fields": [
        { "name": "id", "type": "int" },
        { "name": "iconCls", "type": "auto" },
        { "name": "text", "type": "string" },
        { "name": "name", "type": "auto" }
    ],
    "xtype": "jsonstore",
    "autoLoad": true,
    "autoDestroy": true
}, ...
Ext will gently create an "implicit model" which I'll be able to work with: load it into forms, save it, delete it, etc.
What I want is to specify through a JSON config not the fields, but the model itself. Is this possible?
Something like:
{
    model: {
        name: 'MiClass',
        extends: 'Ext.data.Model',
        "proxy": {
            "type": "rest",
            "url": "/feature/163",
            "timeout": 600000
        },
        etc...
    },
    "autoLoad": true,
    "autoDestroy": true
}, ...
That way I would be able to create a whole JSON from the server without having to glue stuff using JS statements on the client side.
Best regards,
I don't see why not. The syntax to create a model class is similar to that of stores and components:
Ext.define('MyApp.model.MyClass', {
    extend: 'Ext.data.Model',
    fields: [...]
});
So if you take this apart, you could call Ext.define(className, config), where className is a string and config is a JSON object, both generated on the server.
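Taking that apart, a minimal sketch of the idea (the endpoint URL is illustrative, and note that Ext.define expects 'extend', not 'extends'):

// Fetch the server-generated config and turn it into a model class
Ext.Ajax.request({
    url: '/feature/163/model-config',
    success: function(response) {
        var cfg = Ext.decode(response.responseText);   // e.g. { model: { name: 'MiClass', proxy: {...} }, ... }
        var body = Ext.apply({ extend: 'Ext.data.Model' }, cfg.model);
        delete body.name;                              // the class name is passed as the first argument instead
        Ext.define(cfg.model.name, body);
    }
});

Once the class is defined, the store config can reference it by name through its model property instead of listing fields.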
There's no way to achieve what I want.
The only way you can do it is by defining the fields of the Ext.data.Store and having it generate the implicit model from the fields configuration.