Winston logger - custom order of log object data - json

I am using Winston for two different logs in my system. In each log, I would like to show the data that is mutual for both logs first, and only after, to show the data that is unique to each log. At the moment, Winston is rearranging my data alphabetically.
This is what my current logs look like:
{"ip":"::1","level":"info","message":"Users were loaded","method":"GET","mod":false,"timestamp":"2023-01-30 09:07:11","url":"/users"}
{"content":"Asset1","field":["description"],"initialKey":"description1","initialValue":"description1","ip":"::1","level":"info","message":"Some message","mod":true,"newKey":"A new description for Asset1","newValue":"A new description for Asset1","subfield":["description"],"timestamp":"2023-01-30 09:07:40","user":"Admin"}```
I want them to look like this:
{"level":"info","timestamp":"2023-01-30 09:07:11","ip":"::1","message":"Users were loaded","mod":false,"method":"GET","url":"/users" }
{"level":"info","timestamp":"2023-01-30 09:07:40","ip":"::1","message":"Some message","mod":true,"user":"Admin","content":"Asset1","field":["description"],"subfield":["description"],"initialKey":"description1","initialValue":"description1","newKey":"A new description for Asset1","newValue":"A new description for Asset1"}

Related

How to extract specific parts of several requests' responses in Postman and add it to a csv file?

Objective: use an API to feed information into an Excel file.
I have a .csv file containing over 8k entries. I will run API requests in a row for those 8k entries and would like to save parts of each reply in another .csv file for all entries.
For example:
file1.csv contains Group IDs (group1, group2, group3, ...).
Every API request using the parameter of Group ID would return a complete response with several entries like the one below:
<entry>
<id>https://example/api/v1/OData/GroupMemberships(GroupId='Group1',MemberId='iYa4qIi86asvbG')</id>
<title type="text">GroupMemberships(GroupId='Group1',MemberId='iYa4qIi86asvbG')</title>
<updated>2022-06-03T13:20:57+00:00</updated>
<author>
<name/>
</author>
<content type="application/xml">
<m:properties>
<d:GroupId m:type="Edm.String">Group1</d:GroupId>
<d:MemberId m:type="Edm.String">iYa4qIi86asvbG</d:MemberId>
<d:MemberType m:type="Edm.String">admin</d:MemberType>
...
...
</m:properties>
...
So in every iteration of the runner, I would like to extract only the information about MemberId and MemberType, and add those to an Excel file that will contain all the information. I expect more than 8,000 groups and over 100k member IDs.
Every request for one Group ID returns several entries containing all member IDs for that group > save those specific entries to CSV > run the next request and repeat.
This CSV file would look like this:
Group ID | Member ID      | Member Type
-------- | -------------- | -----------
Group1   | iYa4qIi86asvbG | Admin
Group1   | memberid2      | User
Group1   | memberId3      | User
Group2   | memberid1      | User
Group2   | memberid2      | Admin
...      | ...            | ...
I am trying to use Postman to run a batch of API requests (using the Collection Runner) in order to get the information I need and feed those specific parts of the response into columns of the CSV. I researched how to achieve this, but the results were not quite what I expected (either they wanted me to write the response into Postman variables, or they wanted me to use Newman, which looked too difficult for my current knowledge of API requests).
Is there an easy way to do this with Postman (maybe by writing a script in the Pre-request or Tests tab?), or should I try a different approach?
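A sketch of the extraction step, assuming the OData/Atom XML shown above. In Postman's Tests tab you would feed this function `pm.response.text()` and log each row with `console.log` (the console output can be copied out, or the collection can be run with Newman and the output redirected to a file, since Postman itself cannot write files). The function name and the regex are illustrative assumptions based on the sample response, not Postman APIs:

```javascript
// Extract one CSV row (GroupId, MemberId, MemberType) per <m:properties>
// block in the response body. Assumes the three elements appear in the
// order shown in the sample response above.
function extractMembers(xml) {
  const rows = [];
  const propsRe = /<d:GroupId[^>]*>([^<]*)<\/d:GroupId>\s*<d:MemberId[^>]*>([^<]*)<\/d:MemberId>\s*<d:MemberType[^>]*>([^<]*)<\/d:MemberType>/g;
  let match;
  while ((match = propsRe.exec(xml)) !== null) {
    rows.push([match[1], match[2], match[3]].join(','));
  }
  return rows;
}

// Minimal stand-in for the response body from the question:
const sample = `<d:GroupId m:type="Edm.String">Group1</d:GroupId>
<d:MemberId m:type="Edm.String">iYa4qIi86asvbG</d:MemberId>
<d:MemberType m:type="Edm.String">admin</d:MemberType>`;

console.log(extractMembers(sample).join('\n'));
// Group1,iYa4qIi86asvbG,admin
```

In a real Tests script you might prefer Postman's built-in `xml2Json` helper over a regex; the regex keeps the sketch self-contained here.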

Not like on Array field

Here is the problem I'm stuck with:
I'm using Rails 4 & MySQL.
I have a Message model which has one sender and one recipient.
I want to be able to archive messages, but if the sender archives a message, the recipient can still access it until they archive it too.
I've serialized a field:
serialize :archived_by, Array
which stores which users have archived the message,
but I can't figure out how to query with it.
Message.where("archived_by like ?", [1].to_yaml)
works well, returning messages archived by User '1'
Message.where.not("archived_by like ?", [1].to_yaml)
won't work, returning nothing
I would like to find something other than a classic many-to-many ...
Thanks!
UPDATE
I finally decided to add 2 fields, one for the sender & one for the recipient to know which archived the message. If someone has the proper way to do this, tell us :)
If you are using PostgreSQL you could query that information natively.
As described in the answer to Searching serialized data, using active record, the downside of serialize, at least under MySQL, is that you bypass the native DB abstraction.
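A note on why the `where.not` version returns nothing (a sketch, assuming MySQL semantics): `serialize` stores the array as YAML text, and in SQL `NOT LIKE` evaluates to NULL rather than true for rows where the column is NULL, so those rows are silently excluded. A minimal illustration of what is actually stored, with a hedged query fix in a comment:

```ruby
require 'yaml'

# `serialize :archived_by, Array` stores YAML text in the column, so the
# LIKE query matches against this string, not against array values:
stored = [1].to_yaml
puts stored.inspect  # "---\n- 1\n"

# Rows where archived_by IS NULL fail `NOT LIKE` (NULL, not true), so a
# sketch of a fix for the "returns nothing" case would be:
#   Message.where("archived_by NOT LIKE ? OR archived_by IS NULL", [1].to_yaml)
```

This also shows why the positive LIKE query works: it is plain substring matching on the YAML representation.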

How to generate logs with timestamp in sikuli?

I want to generate log entries with a timestamp in place of the popups I have used below.
Please suggest some steps for generating logs with a timestamp.
Code:-
click("1450851018693.png")
wait(2)
click(Pattern("1450851091319.png").targetOffset(1,2))
click(Pattern("1450851555941.png").targetOffset(-201,1))
type("111")
click(Pattern("1450851201892.png").targetOffset(-13,2))
type("121")
wait(1)
if exists("1450851253342.png"):
    popup("start button is enabled")
    click("1450851253342.png")
In the above code, instead of popups, I want the messages to be logged to a file with a timestamp.
Please help.
You can use Python's logging module, importing it and getting a logger instance:
import logging
FORMAT = '%(asctime)-15s %(message)s'
logging.basicConfig(format=FORMAT)
logger = logging.getLogger('')
Then use it in your code (by default, level INFO is not printed to the console, but WARNING is):
logger.warning('My message')
You should see a log entry in your console like:
2016-03-07 13:10:43,151 My message
See the Python logging documentation for a description and a basic tutorial.

Liferay: how to get ddmContentModel by json-ws

I've managed to get the DDL structures via:
http://localhost:8090/api/jsonws/ddmstructure/get-structures/group-id/10184
To get the groupId parameter I used:
http://localhost:8090/api/jsonws/group/get-user-group/company-id/10157/user-id/10639
To find the userId:
http://localhost:8090/api/jsonws/user/get-user-by-email-address/company-id/10157/email-address/test%40liferay.com
The companyId comes from:
http://localhost:8090/api/jsonws/company/get-company-by-virtual-host/virtual-host/localhost
Please, could you point me to any resources describing the JSON web services in more detail?
I didn't find any docs that define the form of the orderByComparator parameter for /ddlrecordset/search.
I would like to get the content of a dynamic data list. I've found that the data is contained in the ddmcontent table - the corresponding Java file is \portal-service\src\com\liferay\portlet\dynamicdatamapping\model\DDMContentModel.java. How can I do that?
I would appreciate any help. Thanks.
Take a look at the articles below. The second one uses the Skinny JSON Provider to get the list of DDL records from a DDL record set:
https://dev.liferay.com/develop/tutorials/-/knowledge_base/6-2/json-web-services
https://dev.liferay.com/develop/tutorials/-/knowledge_base/6-2/invoking-services-using-skinny-json-provider

Manipulating a JSON string with JMeter

I'm new to JMeter and to web applications in general, and I need some help with a JSON POST.
I have an application with a POST request that sends a JSON string to save the data that was created or changed.
Here is an example of the JSON that is sent through the POST request:
{"ID":0,"Description":"Test 1"}
With that, the first user will create a new registry "Test 1", and the second user will create another registry with the same description. I'd like to be able to set the ID and Description as variables so I can manipulate them. When the POST receives ID 0, it creates a new registry; when it receives a specific ID, it updates that registry.
I'm trying to simulate scenarios like: user 1 creates, user 2 updates; or user 1 creates, user 2 creates something different.
If your aim is to update the ID and the Description at run time:
Keep the ID and the Description in a CSV file, something like the below:
0, Test1
0, Test1
1, Test2
Use a "CSV Data Set Config" (under Config Elements) to read the CSV file. Set the file name and enter "ID,Description" in the Variable Names field.
Update the JSON as:
{"ID":${ID},"Description":"${Description}"}
This approach will read the data from the CSV, update the JSON at run time, and send the request.
The first thread/loop will take the first row and the second thread will take the second row.
It should work; let me know if anything is unclear.