The main issue I am addressing in the redesign of a small embedded device (a PID controller) is the storage of device parameters. The old solution, which I partially present here, was space efficient but clumsy to maintain when new parameters were added. It was based on device parameter IDs that had to match the EEPROM addresses, as in the example below:
// EEPROM variable addresses
#define EE_CRC 0 // EEPROM CRC-16 value
#define EE_PROCESS_BIAS 1 // FLOAT, -100.00 - 100.00 U
#define EE_SETPOINT_VALUE 3 // FLOAT, -9999 - 9999.9
#define EE_SETPOINT_BIAS 5 // CHAR, -100 - 100 U
#define EE_PID_USED 6 // BYTE, 1 - 3
#define EE_OUTPUT_ACTION 7 // LIST, DIRE/OBRNU
#define EE_OUTPUT_TYPE 8 // LIST, GRIJA/MOTOR
#define EE_PROCESS_BIAS2 9 // FLOAT, -100.00 - 100.00 U
#define EE_SETPOINT_VALUE2 11 // FLOAT, -9999 - 9999.9
#define EE_SETPOINT_BIAS2 13 // CHAR, -100 - 100 U
#define EE_PID_USED2 14 // BYTE, 1 - 3
#define EE_OUTPUT_ACTION2 15 // LIST, DIRE/OBRNU
#define EE_OUTPUT_TYPE2 16 // LIST, GRIJA/MOTOR
#define EE_LINOUT_CALIB_ZERO 17 // FLOAT, -100.0 - 100.0
#define EE_LINOUT_CALIB_GAIN 19 // FLOAT, -2.0 - 2.0
Every address was hardcoded, and the next address was defined depending on the size of the previous item (note the uneven spacing between addresses). It was efficient, as no EEPROM storage was wasted, but difficult to expand without introducing bugs.
In other parts of the code (e.g. HMI menus, data storage) a parameter ID list matching the addresses given above would be used, something like the following:
// Parameter identification, NEVER USE 0 (zero) as ID since it's NULL
// Sequence is not important, but MUST be same as in setparam structure
#define ID_ENTER_PASSWORD_OPER 1
#define ID_ENTER_PASSWORD_PROGRAM 2
#define ID_ENTER_PASSWORD_CONFIG 3
#define ID_ENTER_PASSWORD_CALIB 4
#define ID_ENTER_PASSWORD_TEST 5
#define ID_ENTER_PASSWORD_TREGU 6
#define ID_PROCESS_BIAS 7
#define ID_SETPOINT_VALUE 8
#define ID_SETPOINT_BIAS 9
#define ID_PID_USED 10
#define ID_OUTPUT_ACTION 11
#define ID_OUTPUT_TYPE 12
#define ID_PROCESS_BIAS2 13
...
Then, in code using those parameters, for example in the user menu structures given below, I have built items using my own PARAM type (structure):
struct param { // Parameter description
WORD ParamID; // Unique parameter ID, never use zero value
BYTE ParamType; // Parameter type
char Lower[EDITSIZE]; // Lowest value string
char Upper[EDITSIZE]; // Highest value string
char Default[EDITSIZE]; // Default value string
BYTE ParamAddr; // Parameter address (in its medium)
};
typedef struct param PARAM;
The list of parameters is then built as an array of structures:
PARAM code setparam[] = {
{NULL, NULL, NULL, NULL, NULL, NULL}, // ID 0 doesn't exist
{ID_ENTER_PASSWORD_OPER, T_PASS, "0", "9999", "0", NULL},
{ID_ENTER_PASSWORD_PROGRAM, T_PASS, "0", "9999", "0", NULL},
{ID_ENTER_PASSWORD_CONFIG, T_PASS, "0", "9999", "0", NULL},
{ID_ENTER_PASSWORD_CALIB, T_PASS, "0", "9999", "0", NULL},
{ID_ENTER_PASSWORD_TEST, T_PASS, "0", "9999", "0", NULL},
{ID_ENTER_PASSWORD_TREGU, T_PASS, "0", "9999", "0", NULL},
{ID_PROCESS_BIAS, T_FLOAT, "-100.0", "100.0", "0", EE_PROCESS_BIAS},
{ID_SETPOINT_VALUE, T_FLOAT, "-999", "9999", "0.0", EE_SETPOINT_VALUE},
{ID_SETPOINT_BIAS, T_CHAR, "-100", "100", "0", EE_SETPOINT_BIAS},
{ID_PID_USED, T_BYTE, "1", "3", "1", EE_PID_USED},
{ID_OUTPUT_ACTION, T_LIST, "0", "1", "dIrE", EE_OUTPUT_ACTION},
{ID_OUTPUT_TYPE, T_LIST, "0", "1", "GrIJA", EE_OUTPUT_TYPE},
{ID_PROCESS_BIAS2, T_FLOAT, "-100.0", "100.0", "0", EE_PROCESS_BIAS2},
...
In essence, every parameter has its unique ID, and this ID had to match the hardcoded EEPROM address. Since the parameters were not fixed in size, I could not use the parameter ID itself as an EEPROM (or other media) address. The EEPROM organization in the example above was 16-bit words, but that does not matter in principle (more space is wasted for chars, so I would prefer an 8-bit organization in the future anyway).
The question:
Is there a more elegant way to do this? Some hash table, well-known pattern, or standard solution for similar problems? EEPROMs are much larger now, and I would not mind using a fixed parameter size (wasting 32 bits on a boolean parameter) in exchange for a more elegant solution. It looks like with fixed-size parameters I could use the parameter ID itself as the address. Is there an obvious downside to this method that I do not see?
I am now using distributed hardware (the HMI, I/O and main controller are separate devices), and I would like a structure that all devices know about, so that, for example, the remote I/O knows how to scale input values and the HMI knows how to display and format data, all based only on the parameter ID. In other words, I need a single place where all parameters are defined.
I did my Google research; very little can be found for small devices that does not involve some database. I was even thinking about an XML definition from which C code for my data structures would be generated, but maybe there is a more elegant solution appropriate for small devices (up to 512 K flash, 32 K RAM)?
If you are not worried about compatibility across changes or processors, you could simply copy the struct between RAM and EEPROM, and only ever access individual members of the RAM copy.
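A minimal sketch of that idea (ee_read_block, ee_write_block and PARAMS_EE_ADDR are hypothetical placeholders for whatever your EEPROM driver provides):

/* All parameters live in one struct; the whole block is mirrored in RAM. */
struct dev_params {
    float process_bias;          /* -100.00 .. 100.00 U */
    float setpoint_value;        /* -9999 .. 9999.9 */
    signed char setpoint_bias;   /* -100 .. 100 U */
    unsigned char pid_used;      /* 1 .. 3 */
    /* ... */
};

static struct dev_params params;   /* working copy in RAM */

void params_load(void) { ee_read_block(&params, PARAMS_EE_ADDR, sizeof params); }
void params_save(void) { ee_write_block(&params, PARAMS_EE_ADDR, sizeof params); }

/* The rest of the firmware only ever touches params.<member>. */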
You could also relatively easily create a tool which compiles a list of defines from the struct and the known packing rules of your compiler, if you do want explicit access to individual members directly in the EEPROM.
Here is what I would do.
I would create a typedef of a structure with the variables you wish to have in the EEPROM.
Using your example it would look something like this:
typedef struct eeprom_st
{
float process_bias;
float setpoint_value;
char setpoint_bias;
/* ... */
} eeprom_st_t;
Then I would create an offset define to mark where the structure is to be stored in the EEPROM.
And I would add a pointer to that type to use it as a dummy object:
#define EEPROM_OFFSET 0
eeprom_st_t *dummy;
Then I would use offsetof to get the offset of the specific variable I need, like this:
eeprom_write(my_setpoint_bias, EEPROM_OFFSET + offsetof(eeprom_st_t, setpoint_bias),
             sizeof(dummy->setpoint_bias));
To make it more elegant I would turn the eeprom write routine into a macro as well.
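As a sketch of such a macro (EE_WRITE is a made-up name, and the eeprom_write signature is assumed to be value, address, size as in the call above):

#include <stddef.h>   /* offsetof */

#define EE_WRITE(field, value)                                  \
    eeprom_write((value),                                       \
                 EEPROM_OFFSET + offsetof(eeprom_st_t, field),  \
                 sizeof(((eeprom_st_t *)0)->field))

/* usage: */
EE_WRITE(setpoint_bias, my_setpoint_bias);

The sizeof(((eeprom_st_t *)0)->field) idiom also makes the separate dummy pointer unnecessary.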
I’m not sure whether this is actually better than what you have, but here is an idea. For easier maintenance, consider encapsulating the knowledge of the EEPROM addresses into an “eeprom” object. Right now you have a parameter object and each instance knows where its data is stored in physical EEPROM. Perhaps it would be easier to maintain if the parameter object had no knowledge of the EEPROM. And instead a separate eeprom object was responsible for interfacing between the physical EEPROM and parameter object instances.
Also, consider adding a version number for the EEPROM data to the data saved in EEPROM. If the device firmware is updated and the format of the EEPROM data changes, then this version number allows the new firmware to recognize and convert the old version of the EEPROM data.
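A rough sketch of that separation, with a layout version stored alongside the data (all names here are made up for illustration):

#include <stddef.h>

#define EE_LAYOUT_VERSION 2u    /* bump whenever the parameter layout changes */

/* The eeprom "object": the only module that knows physical addresses. */
typedef struct {
    unsigned int  base_addr;    /* start of the parameter block in EEPROM */
    unsigned char version;      /* layout version read from EEPROM at startup */
} eeprom_t;

/* Parameter code accesses storage only through this interface, by ID. */
int eeprom_read_param (eeprom_t *ee, unsigned int param_id, void *dst, size_t size);
int eeprom_write_param(eeprom_t *ee, unsigned int param_id, const void *src, size_t size);

/* At startup: if ee->version != EE_LAYOUT_VERSION, run a conversion
   (or load defaults) before the rest of the firmware uses the data. */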
Given the following JSON input:
{
"hostname": "server1.domain.name\nserver2.domain.name\n*.gtld.net",
"protocol": "TCP",
"port": "8080\n8443\n9500-9510",
"component": "Component1",
"hostingLocation": "DC1"
}
I would like to obtain the following JSON output:
{
"hostname": [
"server1.domain.name",
"server2.domain.name",
"*.gtld.net"
],
"protocol": "TCP",
"port": [
"8080-8080",
"8443-8443",
"9500-9510"
],
"component": "Component1",
"hostingLocation": "DC1"
}
Considering:
That the individual values in the port array may, or may not, be separated by a - character (I have no control over this).
That if an individual value in the port array does not contain the - separator, I then need to add it and then repeat the array value after the - separator. For example, 8080 becomes 8080-8080, 8443 becomes 8443-8443 and so forth.
And finally, that if a value in the port array is already of the format value-value, I should simply leave it unmodified.
I've been banging my head against this filter all afternoon, after reading many examples both here and in the official jq online documentation. I simply can't figure out how to accommodate consideration #3 above.
The filter I have now:
{hostname: .hostname | split("\n"), protocol: .protocol, port: .port | split("\n") | map(select(. | contains("-") | not)+"-"+.), component: .component, hostingLocation: .hostingLocation}
Yields the following output JSON:
{
"hostname": [
"server1.domain.name",
"server2.domain.name",
"*.gtld.net"
],
"protocol": "TCP",
"port": [
"8080-8080",
"8443-8443"
],
"component": "Component1",
"hostingLocation": "DC1"
}
As you can see above, I subsequently lose the 9500-9510 value as it already contains the - string which my filter weeds out.
If my logic does not fail me, I would need to stick an if statement within my filter so that only array values that do not contain the - string are modified, while array values that do contain the separator are left untouched. However, I cannot seem to figure this last piece out.
I will happily accept any alternative filter that yields the desired output, but I am also really keen on understanding where my logic fails in the above filter.
Thanks in advance to anyone spending their valuable time helping me out!
/Joel
First, we split the hostname string by a newline character (.hostname /= "\n") and do the same with the port string (.port /= "\n"). Actually, we can combine these identical operations into one: (.hostname, .port) /= "\n"
Next, for every element of the port array (.port[]) we split by any non-digit character (split("[^\\d]";"g")) resulting in an array of digit-only strings, from which we take the first element (.[0]), then a dash sign, and finally either the second element, if present, otherwise the first one again (.[1]//.[0])
With your input in a file called input.json, the following should convert it into the desired format:
jq '
(.hostname, .port) /= "\n" |
.port[] |= (split("[^\\d]";"g") | "\(.[0])-\(.[1]//.[0])")
' input.json
Regarding your considerations:
#1: As we split at any non-digit character, it makes no difference which character separates the values of a port range. If more than one character could separate them (e.g. an arrow -> or a dash surrounded by spaces), simply replace the regex [^\\d] with [^\\d]+ to capture more than one non-digit character.
#2 and #3: We always produce a range by including a dash sign and a second value which, depending on the presence of a second item, is either that item or the first one again.
Regarding your approach:
Inside map you used select, which evaluates to empty if the condition (contains("-") | not) is not met. As "9500-9510" does indeed contain a dash sign, it didn't survive. An if statement inside the select wouldn't help, because even when select doesn't evaluate to empty it doesn't modify anything; it just reproduces its input unchanged. Therefore, if select let through both cases (with and without dash signs), it would become useless. You could, however, work with an if statement outside of the select, but I consider the solution above simpler.
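As a sketch of that if-based route (only the port part is shown; the rest of your original filter would stay the same):

.port |= (split("\n") | map(if contains("-") then . else . + "-" + . end))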
I can't figure out how to remove null values (and corresponding keys) from the output JSON.
Using JSON GENERATE, I am creating JSON output, and in the output I am getting \u0000 for null values.
I want to identify and remove the null values and their keys from the JSON.
COBOL version: 6.1.0
I am generating a file with PIC X(5000). I tried INSPECT and other statements but no luck :(
For example:
{
"Item": "A",
"Price": 12.23,
"Qty": 123
},
{
"Item": "B",
"Price": \u000,
"Qty": 234
},
{
"Item": "C",
"Price": 23.2,
"Qty": \u0000
}
In output I want:
{
"Item": "A",
"Price": 12.23,
"Qty": 123
},
{
"Item": "B",
"Qty": 234
},
{
"Item": "C",
"Price": 23.2,
}
Approach 1:
Created the JSON using the JSON GENERATE command and defined the output field as PIC X(50000).
After converting to UTF-8, I am trying to use INSPECT to find the '\u0000' by its hex value, but it has no effect on UTF-8 arguments and I am not able to search for the '\u0000' values.
perform varying utf-8-pos from 1 by 1
until utf-8-pos = utf-8-end
EVALUATE TRUE
WHEN JSONOUT(1:utf-8-pos) = X'5C' *> first finding a "\" value in o/p
perform varying utf-8-pos from 1 by 1
until JSONOUT(1:utf-8-pos) = x'22' *> second finding a end position string " as X'22'
move JSONOUT(1: utf-8-end - utf-8-pos) to JSONOUT *> skip the position of null
end-perform
WHEN JSONOUT(1:utf-8-pos) NOT= X'5C'
continue
WHEN OTHER
continue
END-EVALUATE
end-perform
Approach 2:
Convert the item to UTF-16 in a national data item by using the NATIONAL-OF function.
Use INSPECT, EVALUATE or PERFORM to find '\u0000' by its hex value N'005C'.
But I am not able to find the correct position of '\u0000'; I also tried NX'005C' but no luck.
IDENTIFICATION DIVISION.
PROGRAM-ID. JSONTEST.
ENVIRONMENT DIVISION.
CONFIGURATION SECTION.
SOURCE-COMPUTER. IBM-370-158.
DATA DIVISION.
WORKING-STORAGE SECTION.
01 root.
05 header.
10 systemId PIC X(10).
10 timestamp PIC X(30).
05 payload.
10 customerid PIC X(10).
77 Msglength PIC 9(05).
77 utf-8-pos PIC 9(05).
77 utf-8-end PIC 9(05).
01 jsonout PIC X(30000).
PROCEDURE DIVISION.
MAIN SECTION.
MOVE "2012-12-18T12:43:37.464Z" to timestamp
MOVE LOW-VALUES to customerid
JSON GENERATE jsonout
FROM root
COUNT IN Msglength
NAME OF root is OMITTED
systemId IS 'id'
ON EXCEPTION
DISPLAY 'JSON EXCEPTION'
STOP RUN
END-JSON
DISPLAY jsonout (1:Msglength)
PERFORM skipnull.
MAIN-EXIT.
EXIT.
Skipnull SECTION.
perform varying utf-8-pos from 1 by 1
until utf-8-pos = utf-8-end
EVALUATE TRUE
WHEN JSONOUT(1:utf-8-pos) = X'5C' *> first finding a "\" value in o/p
perform varying utf-8-pos from 1 by 1
until JSONOUT(1:utf-8-pos) = x'22' *> second finding a end position string " as X'22'
move JSONOUT(1: utf-8-end - utf-8-pos) to JSONOUT *> skip the position of null
end-perform
WHEN JSONOUT(1:utf-8-pos) NOT= X'5C'
continue
WHEN OTHER
continue
END-EVALUATE
end-perform.
Skipnull-exit.
EXIT.
Sample output: as we don't have any value to fill in for customerid, in the output we get:
{"header" : {
"timestamp" : "2012-12-18T12:43:37.464Z",
"customerid" : "\u0000\u0000\u00000" }
}
In the result I want to skip customerid from the output entirely, i.e. both the name and the value should be omitted from the output file.
Since Enterprise COBOL's JSON GENERATE is an all-in-one-go command I don't think there's an easy way to do this in V6.1.
Just to give you something to look forward to: Enterprise-COBOL 6.3 offers an extended SUPPRESS-clause that does just what you need:
JSON GENERATE JSONOUT FROM MYDATA COUNT JSONLEN
SUPPRESS WHEN ZERO
ON EXCEPTION DISPLAY 'ERROR JSON-CODE: ' JSON-CODE
NOT ON EXCEPTION DISPLAY 'JSON GENERATED - LEN=' JSONLEN
You can also suppress WHEN SPACES, WHEN LOW-VALUE or WHEN HIGH-VALUE.
You can also limit suppression to certain fields:
SUPPRESS Price WHEN ZERO
Qty WHEN ZERO
Unfortunately this feature hasn't been backported to 6.1 yet (it's been added to 6.2 with the December 2020 PTF) and I don't know whether it will be...
I don't know anything about COBOL, but I needed the same thing in JavaScript, so I will share my JavaScript function. If you can translate it to COBOL, maybe it will help you.
function clearMyJson(obj) {
    for (var i in obj) {
        if ($.isArray(obj[i])) {
            if (obj[i].length == 0)
                delete obj[i];           // remove empty arrays
            else
                clearMyJson(obj[i]);     // recurse to clear the array
        } else if ($.isPlainObject(obj[i])) {
            clearMyJson(obj[i]);         // recurse to clear the nested object
        } else if (obj[i] == null || obj[i] === "") {
            delete obj[i];               // delete property if it is null or empty
        }
    }
}
I need to compare duplicate IPs in a JSON array by their date field and remove the ones with the older date.
Example:
[
{
"IP": "10.0.0.20",
"Date": "2019-09-14T20:00:11.543-03:00"
},
{
"IP": "10.0.0.10",
"Date": "2019-09-17T15:45:16.943-03:00"
},
{
"IP": "10.0.0.10",
"Date": "2019-09-18T15:45:16.943-03:00"
}
]
The output of the operation needs to look like this:
[
{
"IP": "10.0.0.20",
"Date": "2019-09-14T20:00:11.543-03:00"
},
{
"IP": "10.0.0.10",
"Date": "2019-09-18T15:45:16.943-03:00"
}
]
For simplicity's sake, I'll assume the order of the data doesn't matter.
First, if your data isn't already in Python, you can use json.load or json.loads to convert it into a Python object, following the straightforward type mappings.
Then your problem has three parts: comparing date strings as dates, finding the maximum element of a list by that date, and performing this process for each distinct IP address. For these purposes, you can use two of Python's built-in functions and two from the standard library.
Python's built-in max and sorted functions (as well as list.sort) support a (keyword-only) key argument, which uses a function to determine the value to compare by. For example, max(d1, d2, key=lambda x: x[0]) compares the data by the first element of each (like d1[0] < d2[0]) and returns whichever of d1 and d2 produced the larger key.
To allow that type of comparison between dates, you can use the datetime.datetime class. If your dates are all in the format specified by datetime.datetime.fromisoformat, you can use that function to turn your date strings into datetimes, which can then be compared to each other. Using that in a function that extracts the dates from the dictionaries gives you the key function you need.
def extract_date(item):
    return datetime.datetime.fromisoformat(item['Date'])
Those functions allow you to choose the object from the list with the largest date, but not to keep separate values for different IP addresses. To do that, you can use itertools.groupby, which takes a key function and puts the elements of the input into separate outputs based on that key. However, there are two things you might need to watch out for with groupby:
It only groups elements that are next to each other. For example, if you give it [3, 3, 2, 2, 3], it will group two 3s, then two 2s, then one 3, rather than grouping all three 3s together.
It returns an iterator of key, iterator pairs, so you have to collect the results yourself. The best way to do that may depend on your application, but a basic approach is nested iterations:
for key, values in groupby(data, key_function):
    for value in values:
        print(key, value)
With the functions I've mentioned above, it should be relatively straightforward to assemble an answer to your problem.
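Putting those pieces together, one possible sketch (raw_json is a hypothetical variable holding the input text):

import datetime
import json
from itertools import groupby

def extract_ip(item):
    return item['IP']

def extract_date(item):
    return datetime.datetime.fromisoformat(item['Date'])

data = json.loads(raw_json)    # list of {"IP": ..., "Date": ...} dicts
data.sort(key=extract_ip)      # groupby only groups adjacent equal keys
latest = [max(group, key=extract_date)   # keep the newest entry per IP
          for _, group in groupby(data, key=extract_ip)]
print(json.dumps(latest, indent=2))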
Suppose my JSON is like this.
{
    "count": 32,
    "weight": 1.13,
    "name": "grape",
    "isFruit": true,
    "currentPrice": "30.00"
}
If I read my JSON like this,
String current = json.getString("currentPrice");
the current variable will have the value "30.00". Is there any way that I can parse this as an integer? I tried Integer.parseInt, but it throws a NumberFormatException for the input string "30.00".
I tried removing the quotes with a regex, but that didn't work.
You need to use one of the following to get current as a number:
parseInt(num); // default way (no radix)
parseInt(num, 10); // parseInt with radix (decimal)
parseFloat(num); // floating point
Number(num); // Number constructor
You want parseFloat(). 30.00 isn't an integer, even though it's numerically EQUAL to the integer 30.
If you want it as an integer, you can use Math.floor() to convert it to one, or you can use parseInt() to get the integer portion, but if you really want the whole value (if it might not always be whole), parse it as a float.
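Since the question itself is Java (org.json's getString), a rough Java equivalent of the parse-as-float-then-truncate idea, assuming current holds "30.00", would be:

String current = json.getString("currentPrice"); // "30.00"
double price = Double.parseDouble(current);      // 30.0
int wholePrice = (int) price;                    // 30, truncates any fraction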
All my university notes are in JSON format, and when I get a set of practical questions from a PDF it is formatted like this:
1. Download and compile the code. Run the example to get an understanding of how it works. (Note that both
threads write to the standard output, and so there is some mixing up of the two conceptual streams, but this
is an interface issue, not of concern in this course.)
2. Explore the classes SumTask and StringTask as well as the abstract class Task.
3. Modify StringTask.java so that it also writes out “Executing a StringTask task” when the execute() method is
called.
4. Create a new subclass of Task called ProdTask that prints out the product of a small array of int. (You will have
to add another option in TaskGenerationThread.java to allow the user to generate a ProdTask for the queue.)
Note: you might notice strange behaviour with a naïve implementation of this and an array of int that is larger
than 7 items with numbers varying between 0 (inclusive) and 20 (exclusive); see ProdTask.java in the answer
for a discussion.
5. Play with the behaviour of the processing thread so that it polls more frequently and a larger number of times,
but “pop()”s off only the first task in the queue and executes it.
6. Remove the “taskType” member variable definition from the abstract Task class. Then add statements such as
the following to the SumTask class definition:
private static final String taskType = "SumTask";
Investigate what “static” and “final” mean.
7. More challenging: write an interface and modify the SumTask, StringTask and ProdTask classes so that they
implement this interface. Here’s an example interface:
What I would like to do is copy it into vim and execute a find and replace to convert it into this:
"1": {
"Task": "Download and compile the code. Run the example to get an understanding of how it works. (Note that both threads write to the standard output, and so there is some mixing up of the two conceptual streams, but this is an interface issue, not of concern in this course.)",
"Solution": ""
},
"2": {
"Task": "Explore the classes SumTask and StringTask as well as the abstract class Task.",
"Solution": ""
},
"3": {
"Task": "Modify StringTask.java so that it also writes out “Executing a StringTask task” when the execute() method is called.",
"Solution": ""
},
"4": {
"Task": "Create a new subclass of Task called ProdTask that prints out the product of a small array of int. (You will have to add another option in TaskGenerationThread.java to allow the user to generate a ProdTask for the queue.) Note: you might notice strange behaviour with a naïve implementation of this and an array of int that is larger than 7 items with numbers varying between 0 (inclusive) and 20 (exclusive); see ProdTask.java in the answer for a discussion.",
"Solution": ""
},
"5": {
"Task": "Play with the behaviour of the processing thread so that it polls more frequently and a larger number of times, but “pop()”s off only the first task in the queue and executes it.",
"Solution": ""
},
"6": {
"Task": "Remove the “taskType” member variable definition from the abstract Task class. Then add statements such as the following to the SumTask class definition: private static final String taskType = 'SumTask'; Investigate what “static” and “final” mean.",
"Solution": ""
},
"7": {
"Task": "More challenging: write an interface and modify the SumTask, StringTask and ProdTask classes so that they implement this interface. Here’s an example interface:",
"Solution": ""
}
After trying to figure this out during the practical (instead of actually doing the practical) this is the closest I got:
%s/\([1-9][1-9]*\)\. \(\_.\{-}\)--end--/"\1": {\r "Task": "\2",\r"Solution": "" \r},/g
The 3 problems with this are:
I have to add --end-- to the end of each question. I would like it to know when the question ends by looking ahead to a line which starts with [1-9][1-9]*; unfortunately, when I search for that, it also replaces that part.
This keeps all the new lines within the question (which is invalid in JSON). I would like it to remove the new lines.
The last entry should not contain a "," after it, because that would also be invalid JSON. (Note: I don't mind this very much, as it is easy to remove the last "," manually.)
Please keep in mind I am very bad at regular expressions and one of the reasons I am doing this is to learn more about regex so please explain any regex you post as a solution.
In two steps:
%s/\n/\ /g
to solve problem 2, and then:
%s/\([1-9][1-9]*\)\. \(\_.\{-}\([1-9][1-9]*\. \|\%$\)\@=\)/"\1": {\r "Task": "\2",\r"Solution": "" \r},\r/g
to solve problem 1.
You can solve problem 3 with another replace round. Also, my solution inserts an unwanted extra space at the end of the task entries. Try to remove it yourself.
Short explanation of what I have added:
\|: or;
\%$: end of file;
\@=: match the preceding atom as a zero-width look-ahead, i.e. find it but don't include it in the match.
If each item sits on a single line, I would transform the text with a macro; it is shorter and more straightforward than :s:
I"<esc>f.s": {<enter>"Task": "<esc>A"<enter>"Solution": ""<enter>},<esc>+
Record this macro in a register, like q, then you can just replay it like 100@q to do the transformation.
Note that
the result will leave a comma (,) at the end; just remove it.
You can also add indentation during your macro recording, so your JSON will be "pretty printed". Or you can make it pretty later with another tool.
You could probably do this with one large regular expression, but that quickly becomes unmaintainable. I would break the task up into 3 steps instead:
Separate each numbered step into its own paragraph.
Put each paragraph on its own line.
Generate the JSON.
Taken together:
%s/^[0-9]\+\./\r&/
%s/\(\S\)\n\(\S\)/\1 \2/
%s/^\([0-9]\+\)\. *\(.*\)/"\1": {\r "Task": "\2",\r "Solution": ""\r},/
This solution also leaves a comma after the last element. This can be removed with:
$s/,//
Explanation
%s/^[0-9]\+\./\r&/ this matches a line starting with a number followed by a dot, e.g. 1., 8., 13., 131., etc., and replaces it with a newline (\r) followed by the match (&).
%s/\(\S\)\n\(\S\)/\1 \2/ this removes any newline that is flanked by non-white-space characters (\S).
%s/^\([0-9]\+\)\. *\(.*\) ... capture the number and text in \1 and \2.
... /"\1": {\r "Task": "\2",\r "Solution": ""\r},/ format text appropriately.
Alternative way using sed, awk and jq
You can perform steps one and two from above straightforwardly with sed and awk:
sed 's/^[0-9]\+\./\n&/' infile
awk '$1=$1; { print "\n" }' RS= ORS=' '
Using jq for the third step ensures that the output is valid JSON:
jq -R 'match("([0-9]+). *(.*)") | .captures | {(.[0].string): { "Task": (.[1].string), "Solution": "" } }'
Here as one command line:
sed 's/^[0-9]\+\./\n&/' infile |
awk '$1=$1; { print "\n" }' RS= ORS=' ' |
jq -R 'match("([0-9]+). *(.*)") | .captures | {(.[0].string): { "Task": (.[1].string), "Solution": "" } }'