How to read/translate macOS 12 (Monterey) .ips crash files?

Starting in macOS 12 (Monterey), the system apparently writes crash reports as .ips files instead of the traditional .crash file format.
The file appears to contain JSON data:
{"app_name":"Helper","timestamp":"2021-10-30 18:49:32.00 +0100","app_version":"3.0.0(66) beta","slice_uuid":"673198dd-94ac-31a7-9e81-09fe6c781255","build_version":"3.0.0.66","platform":1,"bundleID":"com.dislt.helper","share_with_app_devs":0,"is_first_party":0,"bug_type":"309","os_version":"macOS 12.0.1 (21A559)","incident_id":"CC03C2EC-C1D4-4F6E-AA1F-6C4EC555D6B8","name":"Helper"}
{
  "uptime" : 91000,
  "procLaunch" : "2021-10-30 18:49:29.7791 +0100",
  "procRole" : "Unspecified",
  "version" : 2,
  "userID" : 501,
  "deployVersion" : 210,
  "modelCode" : "MacBookPro14,3",
  "procStartAbsTime" : 91844701503187,
  "coalitionID" : 1244,
  "osVersion" : {
    "train" : "macOS 12.0.1",
    "build" : "21A559",
    "releaseType" : "User"
  },
  "captureTime" : "2021-10-30 18:49:32.4572 +0100",
  "incident" : "92A89610-D70A-4D93-A974-A9018BB5C72A",
  "bug_type" : "309",
  "pid" : 77765,
  "procExitAbsTime" : 91847378271126,
  "cpuType" : "X86-64",
  "procName" : "Helper",
  ...
When I preview the file or open it in the Console app, a traditional crash report is automatically generated:
-------------------------------------
Translated Report (Full Report Below)
-------------------------------------
Process: Helper [77765]
Path: /Users/USER/Library/Application Support/Helper.app/Contents/MacOS/Helper
Identifier: com.distl.helper
Version: 3.0.0(66) beta (3.0.0.66)
Code Type: X86-64 (Native)
Parent Process: TestBead [77726]
Responsible: TestBead [77726]
User ID: 501
Date/Time: 2021-10-30 18:49:32.4572 +0100
OS Version: macOS 12.0.1 (21A559)
Report Version: 12
Bridge OS Version: 3.0 (14Y908)
Anonymous UUID: CC03C2EC-C1D4-4F6E-AA1F-6C4EC555D6B8
Time Awake Since Boot: 91000 seconds
System Integrity Protection: enabled
Crashed Thread: 1 Dispatch queue: com.apple.NSXPCConnection.user.anonymous.77726
Exception Type: EXC_BAD_ACCESS (SIGSEGV)
Exception Codes: KERN_INVALID_ADDRESS at 0x00007f780071a000
Exception Codes: 0x0000000000000001, 0x00007f780071a000
Exception Note: EXC_CORPSE_NOTIFY
Termination Reason: Namespace SIGNAL, Code 11 Segmentation fault: 11
Terminating Process: exc handler [77765]
...
I have customer support and development tools that scan these crash report files automatically. Is there a way to automate the translation of the JSON data back into the traditional crash report format?
I'd like to do this to (a) avoid rewriting my crash report scanning tools (although that wouldn't be impossible), and (b) automatically translate these files into a human-readable format without resorting to opening each file in the Console app.

I've run into the same problem. I haven't tried it myself yet, but someone has already created an ips2crash command, available on GitHub. As the name implies, it should convert an .ips file to the (now) legacy crash report format.
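If you want to roll your own tooling instead, the .ips format is simple to pick apart: the file is a one-line JSON header followed by a second JSON document containing the report body. A minimal sketch (the file name is illustrative; the field names are taken from the sample above):

import json

# An .ips crash file is a one-line JSON header followed by a JSON body.
with open("Helper.ips", "r", encoding="utf-8") as f:
    header = json.loads(f.readline())
    body = json.load(f)

# Reconstruct a few of the "Translated Report" fields; the full mapping
# (threads, frames, symbolication) is what a tool like ips2crash implements.
print(f"Process:    {body.get('procName')} [{body.get('pid')}]")
print(f"Identifier: {header.get('bundleID')}")
print(f"Version:    {header.get('app_version')} ({header.get('build_version')})")
print(f"OS Version: {header.get('os_version')}")
print(f"Date/Time:  {body.get('captureTime')}")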

Related

Telegraf json_v2 parser error: Unable to convert field to type int. strconv.ParseInt: parsing invalid syntax

Good day!
I have built a small IoT device that monitors the conditions inside a specific enclosure using an ESP32 and a couple of sensors. I want to monitor that data by publishing it to the ThingSpeak cloud, then writing it to InfluxDB with Telegraf and finally using the InfluxDB data source in Grafana to visualize it.
So far I have made everything work flawlessly, with one small exception: one of the plugins in my Telegraf config fails with the error:
parsing metrics failed: Unable to convert field 'temperature' to type int: strconv.ParseInt: parsing "15.4": invalid syntax
The plugins are [[inputs.http]] and [[inputs.http.json_v2]], and what I am doing with them is authenticating against my ThingSpeak API and parsing the JSON output of my fields. In my /etc/telegraf/telegraf.conf, under [[inputs.http.json_v2.field]], I have added type = int, because otherwise Telegraf writes my metrics as strings in InfluxDB and the only way to visualize them is with either a table or a single stat; the rest of the Flux queries fail with the error unsupported input type for mean aggregate: string. However, when I change to type = float in the config file I get a different error:
unprocessable entity: failure writing points to database: partial write: field type conflict: input field "temperature" on measurement "sensorData" is type float, already exists as type string dropped=1
I suspect that I have misconfigured the parser plugin; however, after hours of debugging I couldn't come up with a solution.
Some information that might be of use:
Telegraf version: Telegraf 1.24.2
Influxdb version: InfluxDB v2.4.0
Please see below for my telegraf.conf as well as the error messages.
Any help would be highly appreciated! (:
[agent]
  interval = "10s"
  round_interval = true
  metric_batch_size = 1000
  metric_buffer_limit = 1000
  collection_jitter = "0s"
  flush_interval = "10s"
  flush_jitter = "0s"
  precision = ""
  hostname = ""
  omit_hostname = false

[[outputs.influxdb_v2]]
  urls = ["http://localhost:8086"]
  token = "XXXXXXXX"
  organization = "XXXXXXXXX"
  bucket = "sensor"

[[inputs.http]]
  urls = [
    "https://api.thingspeak.com/channels/XXXXX/feeds.json?api_key=XXXXXXXXXX&results=2"
  ]
  name_override = "sensorData"
  tagexclude = ["url", "host"]
  data_format = "json_v2"
  ## HTTP method
  method = "GET"
  [[inputs.http.json_v2]]
    [[inputs.http.json_v2.field]]
      path = "feeds.1.field1"
      rename = "temperature"
      type = "int"    # Error message 1
      #type = "float" # Error message 2
Error when type = "float":
me#myserver:/etc/telegraf$ telegraf -config telegraf.conf --debug
2022-10-16T00:31:43Z I! Starting Telegraf 1.24.2
2022-10-16T00:31:43Z I! Available plugins: 222 inputs, 9 aggregators, 26 processors, 20
parsers, 57 outputs
2022-10-16T00:31:43Z I! Loaded inputs: http
2022-10-16T00:31:43Z I! Loaded aggregators:
2022-10-16T00:31:43Z I! Loaded processors:
2022-10-16T00:31:43Z I! Loaded outputs: influxdb_v2
2022-10-16T00:31:43Z I! Tags enabled: host=myserver
2022-10-16T00:31:43Z I! [agent] Config: Interval:10s, Quiet:false, Hostname:"myserver",
Flush Interval:10s
2022-10-16T00:31:43Z D! [agent] Initializing plugins
2022-10-16T00:31:43Z D! [agent] Connecting outputs
2022-10-16T00:31:43Z D! [agent] Attempting connection to [outputs.influxdb_v2]
2022-10-16T00:31:43Z D! [agent] Successfully connected to outputs.influxdb_v2
2022-10-16T00:31:43Z D! [agent] Starting service inputs
2022-10-16T00:31:53Z E! [outputs.influxdb_v2] Failed to write metric to sensor (will be
dropped: 422 Unprocessable Entity): unprocessable entity: failure writing points to
database: partial write: field type conflict: input field "temperature" on measurement
"sensorData" is type float, already exists as type string dropped=1
2022-10-16T00:31:53Z D! [outputs.influxdb_v2] Wrote batch of 1 metrics in 8.9558ms
2022-10-16T00:31:53Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics
Error when type = "int":
me#myserver:/etc/telegraf$ telegraf -config telegraf.conf --debug
2022-10-16T00:37:05Z I! Starting Telegraf 1.24.2
2022-10-16T00:37:05Z I! Available plugins: 222 inputs, 9 aggregators, 26 processors, 20
parsers, 57 outputs
2022-10-16T00:37:05Z I! Loaded inputs: http
2022-10-16T00:37:05Z I! Loaded aggregators:
2022-10-16T00:37:05Z I! Loaded processors:
2022-10-16T00:37:05Z I! Loaded outputs: influxdb_v2
2022-10-16T00:37:05Z I! Tags enabled: host=myserver
2022-10-16T00:37:05Z I! [agent] Config: Interval:10s, Quiet:false, Hostname:"myserver",
Flush Interval:10s
2022-10-16T00:37:05Z D! [agent] Initializing plugins
2022-10-16T00:37:05Z D! [agent] Connecting outputs
2022-10-16T00:37:05Z D! [agent] Attempting connection to [outputs.influxdb_v2]
2022-10-16T00:37:05Z D! [agent] Successfully connected to outputs.influxdb_v2
2022-10-16T00:37:05Z D! [agent] Starting service inputs
2022-10-16T00:37:10Z E! [inputs.http] Error in plugin:
[url=https://api.thingspeak.com/channels/XXXXXX/feeds.json?
api_key=XXXXXXX&results=2]: parsing metrics failed: Unable to convert field
'temperature' to type int: strconv.ParseInt: parsing "15.3": invalid syntax
Fixed it by leaving type = "float" under [[inputs.http.json_v2.field]] in telegraf.conf and creating a NEW bucket with a new API key in InfluxDB.
The issue was that the bucket sensor I had previously defined in my telegraf.conf already had the field temperature created in my InfluxDB database from previous tries, with its type stored as string, and an existing field's type cannot be overwritten by new writes of type float.
As soon as I deleted all pre-existing buckets, everything started working as expected.
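For what it's worth, inspecting the raw ThingSpeak payload also makes the type question clearer: the feed entries carry the reading as a decimal value such as "15.4", which is why type = "int" can never parse it and "float" is the right choice once the bucket no longer contains the old string field. A quick sketch to dump each value and its JSON type (channel ID and API key are placeholders):

import json
import urllib.request

# Placeholder channel ID and API key; substitute your own ThingSpeak values.
url = "https://api.thingspeak.com/channels/XXXXX/feeds.json?api_key=XXXXXXXXXX&results=2"

with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

for feed in data["feeds"]:
    value = feed.get("field1")
    # Prints the JSON type (str/float/NoneType) and the raw value Telegraf will try to convert.
    print(type(value).__name__, value)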
[Screenshot: InfluxDB dashboard]

Possible to set multiple slow log and error log on mysql module filebeat?

I have one development server with the following already installed:
elasticsearch
kibana
filebeat
docker
On Docker, two MariaDB database containers are already running.
I have already set up Filebeat for one MariaDB database,
with this config in /etc/filebeat/modules.d/mysql.yml:
- module: mysql
  # Error logs
  error:
    enabled: true
    var.paths: ["/media/dbdev1/data/mysql_error.log"]
  # Slow logs
  slowlog:
    enabled: true
    var.paths: ["/media/dbdev1/data/mysql_slow.log"]
If I need more error logs and slow logs from the other MariaDB container, can I just change /etc/filebeat/modules.d/mysql.yml like this?
- module: mysql
  # Error logs
  error:
    enabled: true
    var.paths: ["/media/dbdev1/data/mysql_error.log","/media/dbdev2/data/mysql_error.log"]
  # Slow logs
  slowlog:
    enabled: true
    var.paths: ["/media/dbdev1/data/mysql_slow.log","/media/dbdev2/data/mysql_slow.log"]
My expectation is that Filebeat can pull mysql_error.log from two different MariaDB containers with different paths as well.
Some Filebeat setup logs:
2021-10-25T10:43:25.616+0700 INFO instance/beat.go:665 Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2021-10-25T10:43:25.619+0700 INFO instance/beat.go:673 Beat ID: cb340c7a-15b4-44f7-8a66-06f6850c1c0f
2021-10-25T10:43:26.499+0700 INFO [beat] instance/beat.go:1014 Beat info {"system_info": {"beat": {"path": {"config": "/etc/filebeat", "data": "/var/lib/filebeat", "home": "/usr/share/filebeat", "logs": "/var/log/filebeat"}, "type": "filebeat", "uuid": "cb340c7a-15b4-44f7-8a66-06f6850c1c0f"}}}
2021-10-25T10:43:26.501+0700 INFO [beat] instance/beat.go:1023 Build info {"system_info": {"build": {"commit": "5ae799cb1c3c490c9a27b14cb463dc23696bc7d3", "libbeat": "7.15.1", "time": "2021-10-07T22:06:49.000Z", "version": "7.15.1"}}}
2021-10-25T10:43:26.501+0700 INFO [beat] instance/beat.go:1026 Go runtime info {"system_info": {"go": {"os":"linux","arch":"amd64","max_procs":2,"version":"go1.16.6"}}}
2021-10-25T10:43:26.503+0700 INFO [beat] instance/beat.go:1030 Host info {"system_info": {"host": {"architecture":"x86_64","boot_time":"2021-10-25T09:45:23+07:00","containerized":false,"name":"localhost.localdomain","ip":["127.0.0.1/8","::1/128","10.0.2.20/24","fe80::a00:27ff:fe8c:82d0/64","192.168.131.5/24","fe80::a00:27ff:fedd:bb9e/64","172.17.0.1/16","172.18.0.1/16","fe80::42:89ff:fe04:e2cb/64","fe80::2c35:92ff:fe88:4daf/64","fe80::38b2:66ff:fe52:b1ec/64"],"kernel_version":"4.18.0-305.19.1.el8_4.x86_64","mac":["08:00:27:8c:82:d0","08:00:27:dd:bb:9e","02:42:ad:3d:07:6b","02:42:89:04:e2:cb","2e:35:92:88:4d:af","3a:b2:66:52:b1:ec"],"os":{"type":"linux","family":"redhat","platform":"centos","name":"CentOS Linux","version":"8","major":8,"minor":4,"patch":2105},"timezone":"WIB","timezone_offset_sec":25200,"id":"b14f68ad4b8c4732a4cfe379692179ec"}}}
2021-10-25T10:43:26.503+0700 INFO [beat] instance/beat.go:1059 Process info {"system_info": {"process": {"capabilities": {"inheritable":null,"permitted":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read","38","39"],"effective":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read","38","39"],"bounding":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read","38","39"],"ambient":null}, "cwd": "/media/dbdev1", "exe": "/usr/share/filebeat/bin/filebeat", "name": "filebeat", "pid": 3503, "ppid": 1941, "seccomp": {"mode":"disabled","no_new_privs":false}, "start_time": "2021-10-25T10:43:23.920+0700"}}}
2021-10-25T10:43:26.503+0700 INFO instance/beat.go:309 Setup Beat: filebeat; Version: 7.15.1
2021-10-25T10:43:26.504+0700 INFO [index-management] idxmgmt/std.go:184 Set output.elasticsearch.index to 'filebeat-7.15.1' as ILM is enabled.
2021-10-25T10:43:26.517+0700 INFO [esclientleg] eslegclient/connection.go:100 elasticsearch url: http://localhost:9200
2021-10-25T10:43:26.521+0700 INFO [publisher] pipeline/module.go:113 Beat name: localhost.localdomain
2021-10-25T10:43:26.585+0700 INFO [esclientleg] eslegclient/connection.go:100 elasticsearch url: http://localhost:9200
2021-10-25T10:43:26.820+0700 INFO [esclientleg] eslegclient/connection.go:273 Attempting to connect to Elasticsearch version 7.15.0
2021-10-25T10:43:26.895+0700 INFO [index-management] idxmgmt/std.go:261 Auto ILM enable success.
2021-10-25T10:43:26.929+0700 INFO [index-management.ilm] ilm/std.go:170 ILM policy filebeat exists already.
2021-10-25T10:43:26.929+0700 INFO [index-management] idxmgmt/std.go:401 Set setup.template.name to '{filebeat-7.15.1 {now/d}-000001}' as ILM is enabled.
2021-10-25T10:43:26.929+0700 INFO [index-management] idxmgmt/std.go:406 Set setup.template.pattern to 'filebeat-7.15.1-*' as ILM is enabled.
2021-10-25T10:43:26.929+0700 INFO [index-management] idxmgmt/std.go:440 Set settings.index.lifecycle.rollover_alias in template to {filebeat-7.15.1 {now/d}-000001} as ILM is enabled.
2021-10-25T10:43:26.929+0700 INFO [index-management] idxmgmt/std.go:444 Set settings.index.lifecycle.name in template to {filebeat {"policy":{"phases":{"hot":{"actions":{"rollover":{"max_age":"30d","max_size":"50gb"}}}}}}} as ILM is enabled.
2021-10-25T10:43:26.974+0700 INFO template/load.go:229 Existing template will be overwritten, as overwrite is enabled.
2021-10-25T10:43:28.637+0700 INFO [add_cloud_metadata] add_cloud_metadata/add_cloud_metadata.go:101 add_cloud_metadata: hosting provider type not detected.
2021-10-25T10:43:31.539+0700 INFO template/load.go:132 Try loading template filebeat-7.15.1 to Elasticsearch
2021-10-25T10:43:32.442+0700 INFO template/load.go:124 Template with name "filebeat-7.15.1" loaded.
2021-10-25T10:43:32.442+0700 INFO [index-management] idxmgmt/std.go:297 Loaded index template.
2021-10-25T10:43:32.475+0700 INFO [index-management.ilm] ilm/std.go:126 Index Alias filebeat-7.15.1 exists already.
2021-10-25T10:43:32.476+0700 INFO kibana/client.go:167 Kibana url: http://localhost:5601
2021-10-25T10:43:38.391+0700 INFO kibana/client.go:167 Kibana url: http://localhost:5601
2021-10-25T10:44:58.953+0700 INFO instance/beat.go:848 Kibana dashboards successfully loaded.
2021-10-25T10:44:58.976+0700 WARN [cfgwarn] instance/beat.go:574 DEPRECATED: Setting up ML using Filebeat is going to be removed. Please use the ML app to setup jobs. Will be removed in version: 8.0.0
2021-10-25T10:44:58.993+0700 INFO [esclientleg] eslegclient/connection.go:100 elasticsearch url: http://localhost:9200
2021-10-25T10:44:59.006+0700 INFO [esclientleg] eslegclient/connection.go:273 Attempting to connect to Elasticsearch version 7.15.0
2021-10-25T10:44:59.006+0700 INFO kibana/client.go:167 Kibana url: http://localhost:5601
2021-10-25T10:44:59.098+0700 WARN fileset/modules.go:425 X-Pack Machine Learning is not enabled
2021-10-25T10:44:59.207+0700 WARN fileset/modules.go:425 X-Pack Machine Learning is not enabled
2021-10-25T10:44:59.207+0700 INFO [esclientleg] eslegclient/connection.go:100 elasticsearch url: http://localhost:9200
2021-10-25T10:44:59.212+0700 INFO [esclientleg] eslegclient/connection.go:273 Attempting to connect to Elasticsearch version 7.15.0
2021-10-25T10:44:59.214+0700 INFO [esclientleg] eslegclient/connection.go:100 elasticsearch url: http://localhost:9200
2021-10-25T10:44:59.219+0700 INFO [esclientleg] eslegclient/connection.go:273 Attempting to connect to Elasticsearch version 7.15.0
2021-10-25T10:44:59.351+0700 INFO [modules] fileset/pipelines.go:133 Elasticsearch pipeline loaded. {"pipeline": "filebeat-7.15.1-mysql-error-pipeline"}
2021-10-25T10:44:59.480+0700 INFO [modules] fileset/pipelines.go:133 Elasticsearch pipeline loaded. {"pipeline": "filebeat-7.15.1-mysql-slowlog-pipeline"}
2021-10-25T10:44:59.480+0700 INFO cfgfile/reload.go:262 Loading of config files completed.
2021-10-25T10:44:59.481+0700 INFO [load] cfgfile/list.go:129 Stopping 1 runners ...
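One thing worth verifying before reloading the module is that both container log paths are actually readable from the host where Filebeat runs, i.e. that each MariaDB container bind-mounts its data directory to the host. A small sketch using the paths from the config above:

import os

# Paths taken from the mysql.yml above; both MariaDB containers must
# expose their log files on the host at these locations for Filebeat to read them.
paths = [
    "/media/dbdev1/data/mysql_error.log",
    "/media/dbdev1/data/mysql_slow.log",
    "/media/dbdev2/data/mysql_error.log",
    "/media/dbdev2/data/mysql_slow.log",
]

for p in paths:
    ok = os.path.isfile(p) and os.access(p, os.R_OK)
    print(f"{p}: {'OK' if ok else 'missing or not readable'}")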

Triton inference server serving TorchScript model

I am trying to serve a TorchScript model with the Triton (formerly TensorRT) Inference Server. But every time I start the server it throws the following error:
PytorchStreamReader failed reading zip archive: failed finding central directory
My folder structure is:
<model_repository>
  <model_name>
    config.pbtxt
    <1>
      <model.pt>
My config.pbtxt file is:
name: "model"
platform: "pytorch_libtorch"
max_batch_size: 1
input[
{
name: "INPUT__0"
data_type: TYPE_FP32
dims: [-1,3,-1,-1]
}
]
output:[
{
name: "OUTPUT__0"
data_type: TYPE_FP32
dims: [-1,1,-1,-1]
}
]
I found the solution. It was a silly mistake on my part: the .pt TorchScript file was corrupted (it had not been written properly), which is why the reader could not find the zip archive's central directory.
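For anyone hitting the same error: "failed finding central directory" means the .pt file is not a valid zip archive, typically because it was truncated or badly exported. A minimal sanity check, with an illustrative stand-in model, is to re-export with torch.jit and confirm the file loads back before copying it into the model repository:

import torch

# Illustrative stand-in; replace with your real module.
class TinyModel(torch.nn.Module):
    def forward(self, x):
        return x * 2

model = TinyModel().eval()

# Script (or trace) the model and save it as a TorchScript archive.
scripted = torch.jit.script(model)
torch.jit.save(scripted, "model.pt")

# Verify the archive is readable before copying it to <model_repository>/<model_name>/1/model.pt.
# If this raises "failed finding central directory", the file is truncated or corrupted.
reloaded = torch.jit.load("model.pt")
print(reloaded(torch.ones(1, 3, 8, 8)).shape)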

Autodesk Forge: WorkItem failing due to AppPackage Issues

My AppPackage fails to load, and I'm unable to find the exact answer in the documentation or from the error message/code.
I tested the bundle by unzipping it into "C:\Program Files\Autodesk\ApplicationPlugins" on my local machine, and it runs/loads as expected.
The AppPackage reports that it was created successfully, and I'm sure it is the most up-to-date version.
The add-in is a .NET DLL file.
Error Report Message
[02/15/2019 18:44:48] Starting work item ffbcfc1ca50546fc9a6372424b2cdae1
[02/15/2019 18:44:48] Start download phase.
[02/15/2019 18:44:48] Start downloading file <CENSORED>.
[02/15/2019 18:44:48] Start preparing AppPackage <CENSORED>.
[02/15/2019 18:44:48] Download bits and install app to local cache.
[02/15/2019 18:44:48] End downloading file <CENSORED>.
[02/15/2019 18:44:48] End download phase.
[02/15/2019 18:44:48] Error: Failed to prepare app package(s).
[02/15/2019 18:44:48] Error: An unexpected error happened during phase Downloading of job.
[02/15/2019 18:44:48] Job finished with result FailedEnvironmentSetup
PackageContents.XML
<?xml version="1.0" encoding="utf-8" ?>
<ApplicationPackage SchemaVersion="1.0" AutodeskProduct="AutoCAD"
                    AppVersion="0.1.0"
                    ProductType="Application"
                    Name="CENSORED"
                    Description="CENSORED"
                    Author="CENSORED"
                    FriendlyVersion="0.1.0"
                    ProductCode="{CENSORED}"
                    UpgradeCode="{CENSORED}"
                    Helpfile="./help.html"
                    Icon="./my-icon.jpeg">
  <CompanyDetails Name="CENSORED" Phone="CENSORED" Email="CENSORED"/>
  <Components>
    <RuntimeRequirements SeriesMin="R22.0" Platform="AutoCAD*" OS="Win64"/>
    <ComponentEntry AppName="CENSORED" Version="0.1.0" ModuleName="./CENSORED.dll" AppType=".Net"
                    AppDescription="CENSORED" LoadOnAutoCADStartup="True">
    </ComponentEntry>
  </Components>
</ApplicationPackage>
Activity Definition:
Note: I had to manually expand some inline functions here, since I have this broken into multiple parts. If there is a typo, rest assured the actual code is syntactically valid and runs.
let activity = <CreateActivityRequest>{
  Id: id,
  Version: 1,
  IsPublic: false,
  AppPackages: ['PACKAGE_NAME'],
  Instruction: {Script: 'D6 '},
  RequiredEngineVersion: '22.0',
  Parameters: {
    InputParameters: [{Name: 'HostDwg', LocalFileName: '$(HostDwg)'}],
    OutputParameters: [{Name: 'output', LocalFileName: `output.json`}]
  },
  HostApplication: undefined,
  AllowedChildProcesses: []
};
Entry from AppPackages Listing:
{
  References: [],
  Resource: '...',
  RequiredEngineVersion: '22.0',
  IsPublic: false,
  IsObjectEnabler: false,
  Version: 1,
  Timestamp: '2019-02-15T19:32:33.527Z',
  Description: '',
  Id: 'CENSORED'
},
Make sure to double-check how you zipped the AppPackage you uploaded. If you look inside your zip file, make sure there is a folder named PACKAGE_NAME.bundle and that the PackageContents.XML file is inside that PACKAGE_NAME.bundle folder.
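If it helps, here is a minimal sketch (folder and archive names are placeholders) of zipping the bundle so that the .bundle folder, rather than its loose contents, sits at the root of the archive:

import zipfile
from pathlib import Path

# Local folder laid out as PACKAGE_NAME.bundle/PackageContents.xml, PACKAGE_NAME.bundle/... (placeholder names).
bundle_dir = Path("PACKAGE_NAME.bundle").resolve()

with zipfile.ZipFile("PACKAGE_NAME.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    for path in bundle_dir.rglob("*"):
        # Prefix every entry with the bundle folder name, so the archive contains
        # "PACKAGE_NAME.bundle/..." entries rather than files at the zip root.
        zf.write(path, Path(bundle_dir.name) / path.relative_to(bundle_dir))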

Error reported while running Launcher with Chisel

I downloaded the chisel-tutorial that is offered on the ucb-bar website.
For practice, I created a Scala file named "Regfile.scala" under the path:
"chisel-tutorial/src/main/scala/solutions/Regfile.scala".
The test file is stored under the path:
"chisel-tutorial/src/test/scala/solutions/RegfileTests.scala".
While running sbt (after executing the command "test:run-main solutions.Launcher Regfile"), I got the following error:
"Errors: 1: in the following tutorials
Bad tutorial name: Regfile "
How can I solve this problem?
You have to add your Regfile to Launcher.scala. The launcher is available at:
src/test/scala/solutions/Launcher.scala
I think you can add something like this to Launcher.scala to test your Regfile:
"Regfile" -> { (backendName: String) =>
Driver(() => new Regfile(), backendName) {
(c) => new RegfileTests(c)
}
},