When I have the junit reporter enabled, my console output is swamped with lines reading:
WARN [reporter.junit]: Sonarqube may fail to parse this report since the test file was not found at [...]
I understand the reason for this is that my spec names don't conform to the special format, but what if I don't care about Sonarqube? Can I just suppress these warnings?
The setting I am referencing is shown in the snippet below:
{
  "compilerOptions": {
    "resolveJsonModule": true
  }
}
I don't really understand why the TS language engineers would add a flag for "resolveJsonModule". Either an environment supports resolving JSON as a module via an import statement (or a require() call), or it doesn't. Why bother with the extra complexity?
Context
Historically, Node has included a specialized JSON loader (unrelated to the ECMA standards) that allows importing JSON data, but only in CommonJS mode.
Standardized importing of anything at all (ES modules) is a relatively recent phenomenon in ECMAScript. Importing text files containing valid JSON, parsed as native JS data ("importing JSON"), is described in a proposal that is still only at Stage 3.
However, there has been recent movement toward implementing the above-mentioned proposal (a sketch of the proposed syntax follows the list):
V8 implemented it in June (Chrome 91+)
TypeScript v4.5.0 implemented it in November
Deno v1.17.0 implemented it in December
Node LTS v16.14.0 implemented it last Tuesday (behind the CLI flag --experimental-json-modules)
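For illustration, here is a minimal sketch of the proposal's import-assertion syntax in TypeScript 4.5+ (./data.json is a hypothetical file; an ES-module compilation target is assumed, and running it on Node 16.14 additionally requires the flag mentioned above):

// Stage-3 syntax: an import assertion telling the runtime the module is JSON.
// './data.json' is a hypothetical file sitting next to this module.
import data from './data.json' assert { type: 'json' };

console.log(data);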
TypeScript
TypeScript is a static type-checker, but also a compiler (technically a transpiler): it transforms your TS source code into JavaScript that is valid for the runtime environment you have specified in your TSConfig. Because different runtime environments have different capabilities, the way you configure the compiler affects the JavaScript that is emitted. As for defaults, the compiler uses algorithmic logic to determine settings. (I can't summarize that here: you honestly have to read the entire reference in order to understand it.) Because loading JSON data has been a non-standard, specialized operation until extremely recently, it has not been a default.
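As a minimal sketch of what the flag actually enables (assuming a hypothetical config.json next to the source file, with "resolveJsonModule" and "esModuleInterop" enabled in tsconfig.json):

// The compiler resolves and type-checks this import; the emitted JavaScript
// defers to the runtime's own loading mechanism (e.g. require() in CommonJS).
import config from './config.json';

console.log(config);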
Alternatives
All JS runtimes offer alternatives to an import statement for importing textual JSON data (which can then be parsed using JSON.parse), and none of them require configuring the compiler in the ways that you asked about (a Node sketch follows the list):
Note: data parsed from JSON strings imported using these methods will not participate in the compiler's "automatic" type inference, because those strings aren't part of the compilation module graph: the parsed values will be typed as any (or possibly unknown in an extremely strict configuration).
Browser and Deno: window.fetch
Deno: Deno.readTextFile
Node: fs.readFile
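For example, here is a minimal sketch of the Node variant ('./data.json' is a hypothetical path):

import { readFile } from 'fs/promises';

// Read the raw JSON text, then parse it. Per the note above, the result is
// typed as any/unknown rather than inferred from the file's contents.
async function loadJson(path: string): Promise<unknown> {
  return JSON.parse(await readFile(path, 'utf8'));
}

loadJson('./data.json').then((data) => console.log(data));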
Additionally, because all JSON (JavaScript Object Notation) is valid JS, you can simply prepend the data in your JSON file with export default , save the file as data.js instead of data.json, and then import it as a standard module: import {default as data} from './data.js';.
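As a concrete sketch of that trick (the contents are placeholders):

// data.js: formerly data.json, with `export default ` prepended.
export default {
  "greeting": "hello",
  "version": 1
};

// Elsewhere: import {default as data} from './data.js';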
Final notes about inferred types:
I prefer to audit the JSON I'm importing and use my own manually-written types for the data (written either by myself or someone else and imported from a module/declaration file), rather than relying on the compiler's types inferred from import statements (which I have found to be too narrow on many occasions). I do this by assigning the parsed JSON data to a new variable using a type assertion.
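A minimal sketch of that approach (User and './user.json' are hypothetical):

import { readFile } from 'fs/promises';

// A hand-written type for the audited JSON data:
interface User { id: number; name: string; }

readFile('./user.json', 'utf8').then((raw) => {
  // The type assertion takes responsibility for the shape, instead of
  // relying on the compiler's (often too narrow) inferred types.
  const user = JSON.parse(raw) as User;
  console.log(user.name);
});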
I have a test documentation tool that accepts reports in XML and JSON. I need to attach screenshots to every test case, even the passed ones. Unfortunately, the tool (Xray for Jira) can only digest screenshots embedded in JSON reports, not in XML ones.
I know that Cucumber produces reports in JSON, but I do not want the tests to be BDD-like.
Is there a test runner that can report in JSON, or a way to convert a JUnit 5 XML report into an appropriate JSON format with screenshots in Base64?
The current setup is Java/Gradle/JUnit 5/Selenide, but it can be reviewed.
Importing attachments, as of today, is supported if you use Xray JSON or Cucumber JSON reports.
The only way, right now, would be to implement either a JUnit 5 TestExecutionListener or a TestWatcher that generates an Xray JSON report.
Note: in the short term, support will be added for JUnit 5 and also for TestNG; currently this is experimental and not yet supported in the product, but please raise a support request asking for this improvement so the team can track your interest in it. The URLs for the previous repos will probably change.
Newman Version (can be found via newman -v): 4.2.2
OS details (type, version, and architecture): Windows 10 Pro, Version 1803. Running all files locally, but hitting an internal API.
Are you using Newman as a library, or via the CLI? CLI
Did you encounter this recently, or has this bug always been there: This is a new collection
Expected behaviour: I need to use a CSV file to import data into the request body of POST requests. All values MUST be strings. My CSV works correctly in Postman, but fails in Newman with the error: Invalid closing quote at line 2; found """ instead of delimiter ",".
Command / script used to run Newman: newman run allPatients.postman_collection.json -e New_QA.postman_environment.json -d 2.csv
Sample collection, and auxiliary files (minus the sensitive details):
In Postman, when I run the requests, all values are strings and must be surrounded by double quotes. I use a CSV file that looks like this:
"bin","pcn","group_id","member_id","last_name","first_name","dob","sex","pharmacy_npi","prescriber_npi"
"""012353""","""01920000""","""TESTD 273444""","""Z9699879901""","""Covg""","""MC""","""19500101""","""2""","""1427091255""","""1134165194"""
When I run the same CSV data file in Newman, I get the error above. I have tried a few options I've seen on this forum without any luck, such as the escape syntax for double quotes:
"/"text/""
The only things I've tried that have not failed pre-run with an error like the above are removing the double quotes entirely or replacing them with single quotes. When I do this, I get a 400 Bad Request, which I suspect is due to me sending invalid data types.
Please close this issue. It was the result of human error.
I was able to fix this by correctly using the syntax suggested elsewhere.
"bin","pcn","group_id","member_id","last_name","first_name","dob","sex","pharmacy_npi","prescriber_npi"
"\"012353\"","\"01920000\"","\"TESTD 273444\"","\"Z9699879901\"","\"Covg\"","\"MC\"","\"19500101\"","\"2\"","\"1427091255\"","\"1134165194\""
I have a NiFi flow which processes data from the Webhose API. Webhose returns a whole webpage of text in its result, as an attribute in the JSON. When I try to extract this using the EvaluateJsonPath processor and write it to a new attribute, I get the "nifi processor exception repository failed to update" error. The content is encoded in UTF-8, and I know that there is a limitation of 65535 bytes for an attribute in NiFi. Is there a workaround for this?
I believe this limitation should be resolved in Apache NiFi 1.2.0 via this JIRA:
https://issues.apache.org/jira/browse/NIFI-3389
Also, keep in mind that having a lot of large attributes is not ideal for performance.
I have a method for parsing a config file into a dictionary. I'm unsure how it should behave if a config parameter is missing. Should it use a default value and log an error, or raise an exception?
I suggest that you separate: (1) the parsing of the configuration file and storing the parsed details in a dictionary, from (2) retrieving name=value pairs from the dictionary. This separation of concerns will then enable you to provide an overloaded API for (2) that specifies whether a missing name=value pair should result in a default value being returned or an exception being raised. For example (pseudo code):
cfg = parseConfigurationFile("example.cfg");
x = cfg.lookupString("x");                  // throws an exception if the name=value pair is missing
y = cfg.lookupString("y", "hello, World!"); // returns the default value if the name=value pair is missing
I also suggest that the API should provide type-safe lookup methods such as lookupInt(), lookupBoolean(), lookupDouble() and so on. Those methods should throw an exception if a looked-up value cannot be parsed into the specified type.
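To make the idea concrete, here is a minimal TypeScript sketch of such an API (all names are hypothetical, echoing the pseudocode above):

// Phase 1 (parsing the file into name=value pairs) is assumed to produce a
// Map<string, string>; phase 2 is the type-safe, overloaded lookup API.
class Config {
  constructor(private readonly entries: Map<string, string>) {}

  // Returns the value, or the supplied default; throws if both are missing.
  lookupString(name: string, defaultValue?: string): string {
    const value = this.entries.get(name) ?? defaultValue;
    if (value === undefined) {
      throw new Error(`missing configuration entry: ${name}`);
    }
    return value;
  }

  // Type-safe variant: also throws if the value cannot be parsed as an int.
  lookupInt(name: string, defaultValue?: number): number {
    const raw = this.entries.get(name);
    if (raw === undefined) {
      if (defaultValue !== undefined) return defaultValue;
      throw new Error(`missing configuration entry: ${name}`);
    }
    const parsed = Number.parseInt(raw, 10);
    if (Number.isNaN(parsed)) {
      throw new Error(`entry "${name}" is not an integer: ${raw}`);
    }
    return parsed;
  }
}

// Usage:
const cfg = new Config(new Map([['x', '42']]));
cfg.lookupInt('x');    // 42
cfg.lookupInt('y', 7); // 7 (default); cfg.lookupInt('y') would throw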
Edit to respond to a comment:
"Thanks for the example. I was more wondering if it was a good idea to even provide default settings and start the application if the config was wrong."
I like the Fail Fast Principle, so I recommend that if any configuration data is invalid, your application should report an error and stop, rather than try to silently repair the error (perhaps by using a default value instead of a bad configuration value) and continue on.
However, I don't think you should necessarily view missing name=value pairs as errors. Instead, it is valid to use a default value for a missing entry. If you take this to an extreme by allowing all the configuration name=value pairs to be optional, then your application will be able to work "out of the box" without any configuration file at all, which arguably improves the application's ease of use for new users.
A few years ago, I wrote Config4* (C++ and Java libraries for parsing a particular configuration-file syntax). Config4* provides an elegant way to enable any/all name=value pairs to be optional: it's what the Config4* manual calls fallback configuration. If you want to learn about that, then I suggest you skim-read Chapter 2 of the Config4* Getting Started Guide to get an understanding of the configuration syntax, and then read Chapter 3 of the same manual to understand the API. Pay particular attention to Sections 3.6.2 (Parsing Embedded Configuration) and 3.6.3 (Using Fallback Configuration).