I am using WebHID in Chrome to communicate with a USB-enabled digital scale. I'm able to connect to the scale and subscribe to a stream of weight data as follows:
// Get a reference to the scale.
// 0x0922 is the vendor of my particular scale (Dymo).
let device = await navigator.hid.requestDevice({filters:[{vendorId: 0x0922}]});
// Open a connection to the scale.
await device[0].open();
// Subscribe to scale data inputs at a regular interval.
device[0].addEventListener("inputreport", event => {
  const { data, device, reportId } = event;
  let buffArray = new Uint8Array(data.buffer);
  console.log(buffArray);
});
I now receive regular input in the format Uint8Array(5) [2, 12, 255, 0, 0], where the fourth position is the weight data. If I put something on the scale, it changes to Uint8Array(5) [2, 12, 255, 48, 0] which is 4.8 pounds.
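For reference, interpreting one of those reports looks roughly like this (the divide-by-10 scaling is just what matches the readings above; other scales may encode the value differently):
// Sketch: decoding the sample report shown above.
const bytes = new Uint8Array([2, 12, 255, 48, 0]); // one input report
const rawWeight = bytes[3];                        // fourth position = 48
console.log(rawWeight / 10 + " lb");               // 4.8 lb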
I would like to zero (tare) the scale so that its current, encumbered state becomes the new zero point. After a successful zeroing, I would expect the scale to start returning Uint8Array(5) [2, 12, 255, 0, 0] again. My current best guess at this is:
device[0]
.sendReport(0x02, new Uint8Array([0x02]))
.then(response => { console.log("Sent output report " + response) });
This is based on a table in the HID Point of Sale Usage Tables.
The first byte is the Report ID, which is 2 as per the table. For the second byte, I want the ZS operation set to 1, i.e. 00000010, which is also 2. sendReport takes the Report ID as the first parameter and an array of all following data as the second parameter. When I send this to the device, it isn't rejected, but it doesn't zero the scale, and response is undefined.
How can I zero this scale using WebHID?
So I ended up in a very similar place, trying to programmatically zero a USB scale. Setting the ZS bit did not seem to do anything. I used Wireshark plus the Stamps.com app to see how they were doing it and noticed that what was actually sent was the Enforced Zero Return, i.e. 0x02 0x01 (Report ID = 2, EZR). I now have this working.
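In WebHID terms that just means sending 0x01 instead of 0x02 as the data byte. A minimal sketch based on the snippet from the question (sendReport resolves with undefined, which is why the response you logged was undefined):
// Send the Enforced Zero Return (EZR) bit rather than Zero Scale (ZS).
device[0]
  .sendReport(0x02, new Uint8Array([0x01])) // Report ID 2, data byte 0x01 = EZR
  .then(() => { console.log("Sent zero (EZR) report"); });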
I'm trying to make a reinforcement learning project for a popular Portuguese card game. I have the environment working, and the code can run on its own for n rounds using random picks.
For a quick explanation, the game is similar to Hearts. It has 4 players in 2 teams. Each player has 10 cards and must follow suit when possible, playing until no cards are left. Each trick earns points, and after 10 tricks the team with more than 60 points wins the round.
First, I had some doubts about how to "encode" the cards/deck. I'm passing to the model the cards on the table, the cards in hand, and the cards already played. I one-hot encoded the deck: 4 suits, 10 numbers/figures. I also have to include the trump (taken from the beginning of the round). As an example, one card looks like [ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0 ], where the first four numbers are the suit, the last is the trump flag (boolean), and the rest say which card it is (2, 3, 4, 5, 6, j, q, k, 7, a). I'm passing the whole deck for each of hand, table, and played, so 40 cards * 3 with 15 features each (1800 features when flattened). Possibly this is not the best approach; if someone could advise on this it would be great.
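For clarity, here's a rough sketch of how one card ends up encoded (the function and names below are just illustrative of what I described, not my actual code):
// Illustrative only: encode a card as [4 suit bits, 10 rank bits, 1 trump flag].
const SUITS = ["hearts", "diamonds", "clubs", "spades"];
const RANKS = ["2", "3", "4", "5", "6", "j", "q", "k", "7", "a"];

function encodeCard(card, trumpSuit) {
  const features = new Array(15).fill(0);
  features[SUITS.indexOf(card.suit)] = 1;           // first 4: suit
  features[4 + RANKS.indexOf(card.rank)] = 1;       // next 10: rank
  features[14] = card.suit === trumpSuit ? 1 : 0;   // last: trump flag
  return features;
}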
With this data, I'm setting 1 player as the AI agent. When it's the AI's turn, the state is cards in hand, cards on table, and cards played. For the next state I'm passing the end of the trick, with the same type of data. The reward from that trick is a cumulative value (max 120 points). When I run the code, the loss outputs a value the first time, then displays NaN, so predictions are coming out as NaN too. (I also have some doubts about the outputs: there are 10, one for each card in hand, but the player starts with 10 cards and, as the game goes on and cards get played, this number goes to zero.)
Here's the code for the training:
async expReplay() {
    console.debug('Training...')
    const minibatch = await this.memory.concat().sort(() => .5 - Math.random()).slice(0, this.batchSize)
    for (let i = 0; i < minibatch.length - 1; i++) {
        let [state, action, reward, next_state, done] = minibatch[i]
        state = await tf.concat(state).flatten().reshape([1, this.stateSize])
        next_state = await tf.concat(next_state).flatten().reshape([1, this.stateSize])
        let target = reward
        if (!done) {
            let predictNext = await this.model.predict(next_state)
            target = reward + this.gamma * predictNext.argMax().dataSync()[0]
        }
        let target_f = await this.model.predict(state).dataSync()
        target_f[action] = target
        target_f = await tf.tensor2d(target_f, [1, this.actionSize])
        await this.model.fit(state, target_f, {
            epochs: 1,
            verbose: 1,
            callbacks: {
                onEpochEnd: (epoch, logs) => {
                    process.stdout.write(`${logs.loss} ${logs.acc} \r`)
                }
            }
        })
        await state.dispose()
        await next_state.dispose()
        await target_f.dispose()
    }
    if (this.epsilon > this.epsilonMin) {
        this.epsilon *= this.epsilonDecay
    }
    return 'Training... stop!'
}
I've used this loop before on a DQN for a Bitcoin trader I was experimenting with, and it worked fine, so I'm guessing my data is wrong somewhere. I've logged state and next_state to check for NaN but didn't catch any...
If you need more info please ask!
I am trying to implement a chart with a ColumnRenderableSeries3D series, but with a small number of data points (25x25) the columns are nearly invisible. At higher numbers of data points (100x100) with a wider range of values this problem becomes even worse and a Moiré pattern appears. What can be done to significantly increase the columns' diameter, so they are easily seen and the Moiré pattern disappears?
If it is relevant, this is being rendered on a VM with a VMware ESXi 6.5 SVGA adapter on Windows Server 2016, over a Remote Desktop connection. Surprisingly, even though 3D support isn't enabled for the VM, SciChart.Examples.Demo.exe says DirectX hardware acceleration is enabled. The version of SciChart is 5.1.0.11405 and SharpDX is 4.0.1.
SciChart3DSurface SciChartSurface3d = new SciChart3DSurface();
XyzDataSeries3D<Double, Double, DateTime> MyXyzDataSeries = new XyzDataSeries3D<Double, Double, DateTime>();
SciChartSurface3d.XAxis = new NumericAxis3D();
SciChartSurface3d.YAxis = new NumericAxis3D();
SciChartSurface3d.ZAxis = new DateTimeAxis3D();
SciChartSurface3d.Camera = new Camera3D() { ZoomToFitOnAttach = true };
SciChartSurface3d.WorldDimensions = new Vector3(200, 100, 200);
SciChartSurface3d.RenderableSeries.Add(new ColumnRenderableSeries3D() { DataSeries = MyXyzDataSeries, ColumnShape = typeof(CubePointMarker3D), DataPointWidthX = 1.0, Opacity = 1.0 });
SciChartSurface3d.BorderThickness = new Thickness(0);
SomeMethodToLoadTheDataSeries();
(Screenshots: the 25x25 and 100x100 cases.)
Edit: Changing DataPointWidthX to DataPointWidth doesn't help. With a width of 1.0 the columns are still nearly invisible.
There are two modes of column width definition:
The first, and the default, is called MaxNonOverlapping. In this mode, the maximum possible width is calculated such that no column overlaps the others.
The second is called FixedSize. In this mode, the width of a column is defined by the value of the ColumnRenderableSeries3D.CoulmnFixedSize property.
The mode is selected via the ColumnRenderableSeries3D.ColumnSpacingMode property. Below is an example of how to set up a fixed-size column chart:
var renderableSeries3D = new ColumnRenderableSeries3D();
renderableSeries3D.ColumnSpacingMode = ColumnSpacingMode.FixedSize;
renderableSeries3D.CoulmnFixedSize = 25;
Note that the value of the CoulmnFixedSize property is in world-coordinate space, so it is related to SciChart3DSurface.WorldDimensions. You can find more information about the coordinate space in the SciChart documentation.
I am using GraphHopper 0.8 via Maven in my Java project. I create a network with the following code:
FlagEncoder encoder = new CarFlagEncoder();
EncodingManager em = new EncodingManager(encoder);
// Creating and saving the graph
GraphBuilder gb = new GraphBuilder(em).
        setLocation(testDir).
        setStore(true).
        setCHGraph(new FastestWeighting(encoder));
GraphHopperStorage graph = gb.create();
for (Node node : ALL NODES OF MY NETWORK) {
    graph.getNodeAccess().setNode(uniqueNodeId, nodeX, nodeY);
}
for (Link link : ALL LINKS OF MY NETWORK) {
    EdgeIteratorState edge = graph.edge(fromNodeId, toNodeId);
    edge.setDistance(linkLength);
    edge.setFlags(encoder.setProperties(linkSpeedInMeterPerSecond * 3.6, true, false));
}
Weighting weighting = new FastestWeighting(encoder);
PrepareContractionHierarchies pch = new PrepareContractionHierarchies(graph.getDirectory(), graph, graph.getGraph(CHGraph.class), weighting, TraversalMode.NODE_BASED);
pch.doWork();
graph.flush();
LocationIndex index = new LocationIndexTree(graph.getBaseGraph(), graph.getDirectory());
index.prepareIndex();
index.flush();
At this point, the bounding box saved in the graph shows the correct numbers. Files are written to disk including the "location_index". However, reloading the data gets me the following error
Exception in thread "main" java.lang.IllegalStateException: Cannot create location index when graph has invalid bounds: 1.7976931348623157E308,1.7976931348623157E308,1.7976931348623157E308,1.7976931348623157E308
at com.graphhopper.storage.index.LocationIndexTree.prepareAlgo(LocationIndexTree.java:132)
at com.graphhopper.storage.index.LocationIndexTree.prepareIndex(LocationIndexTree.java:287)
The reading is done with the following code
FlagEncoder encoder = new CarFlagEncoder();
EncodingManager em = new EncodingManager(encoder);
GraphBuilder gb = new GraphBuilder(em).
        setLocation(testDir).
        setStore(true).
        setCHGraph(new FastestWeighting(encoder));
// Load and use the graph
GraphHopperStorage graph = gb.load();
// Load the index
LocationIndex index = new LocationIndexTree(graph.getBaseGraph(), graph.getDirectory());
if (!index.loadExisting()) {
    index.prepareIndex();
}
So LocationIndexTree.loadExisting runs fine until it enters prepareAlgo. At this point the graph is loaded, but the bounding box is not set and stays at its defaults. Reading the location index does not update the bounding box, hence the error downstream. What am I doing wrong? How do I preserve the bounding box in the first place? How do I reconstruct the bbox?
TL;DR: Don't use Cartesian coordinates; stick to the WGS84 coordinates used by OSM.
A Cartesian coordinate system such as EPSG:25832 may have coordinates in the range of millions, and after performing some math the coordinates may grow even larger in magnitude. GraphHopper eventually stores coordinates as integers, so all of these coordinates can end up as Integer.MAX_VALUE, which results in an invalid bounding box.
(my code is written in Java but the question is agnostic; I'm just looking for an algorithm idea)
So here's the problem: I made a method that simply finds the median of a data set (given in the form of an array). Here's the implementation:
public static double getMedian(int[] numset) {
    ArrayList<Integer> anumset = new ArrayList<Integer>();
    for (int num : numset) {
        anumset.add(num);
    }
    anumset.sort(null);
    int mid = anumset.size() / 2;
    if (anumset.size() % 2 != 0) {
        // Odd count: the middle element is the median.
        return anumset.get(mid);
    } else {
        // Even count: average the two middle elements.
        return (anumset.get(mid - 1) + anumset.get(mid)) / 2.0;
    }
}
A teacher in the school that I go to then challenged me to write a method to find the median again, but without using any data structures. This includes anything that can hold more than one value, so that includes Strings, any forms of arrays, etc. I spent a long while trying to even conceive of an idea, and I was stumped. Any ideas?
The usual algorithm for the task is Hoare's Select algorithm. This is pretty much like a quicksort, except that in quicksort you recursively sort both halves after partitioning, but for select you only do a recursive call in the partition that contains the item of interest.
For example, let's consider an input like this in which we're going to find the fourth element:
[ 7, 1, 17, 21, 3, 12, 0, 5 ]
We'll arbitrarily use the first element (7) as our pivot. We initially split it like this (with the pivot marked with a *):
[ 1, 3, 0, 5 ] *7 [ 17, 21, 12 ]
We're looking for the fourth element, and 7 is the fifth element, so we partition (only) the left side. We'll again use the first element as our pivot, giving the following (using { and } to mark the part of the input we're now ignoring):
[ 0 ] 1 [ 3, 5 ] { 7, 17, 21, 12 }
1 has ended up as the second element, so we need to partition the items to its right (3 and 5):
{0, 1} 3 [5] {7, 17, 21, 12}
Using 3 as the pivot element, we end up with nothing to the left, and 5 to the right. 3 is the third element, so we need to look to its right. That's only one element, so that (5) is our median.
By ignoring the unused side, this reduces the complexity from O(n log n) for sorting to only O(N) [though I'm abusing the notation a bit--in this case we're dealing with expected behavior, not worst case, as big-O normally does].
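Here's a rough sketch of that idea (in JavaScript, since the question is language-agnostic; a Java version would look much the same). It partitions in place and keeps narrowing to the side containing the index we want, so it needs no storage beyond the input array itself:
// Quickselect sketch: returns the k-th smallest element (0-based) of arr.
// Partitions in place; no auxiliary data structures.
function select(arr, k, lo = 0, hi = arr.length - 1) {
  while (true) {
    if (lo === hi) return arr[lo];
    const pivot = arr[lo];
    let i = lo, j = hi;
    // Hoare-style partition around the pivot value.
    while (i <= j) {
      while (arr[i] < pivot) i++;
      while (arr[j] > pivot) j--;
      if (i <= j) {
        [arr[i], arr[j]] = [arr[j], arr[i]];
        i++; j--;
      }
    }
    // Continue only on the side that contains index k.
    if (k <= j) hi = j;
    else if (k >= i) lo = i;
    else return arr[k]; // k landed on a value equal to the pivot
  }
}

// The example from above: the fourth element (index 3) is the lower median.
console.log(select([7, 1, 17, 21, 3, 12, 0, 5], 3)); // 5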
There's also a median of medians algorithm if you want to assure good behavior (at the expense of being somewhat slower on average).
This gives guaranteed O(N) complexity.
Sort the array in place. Take the element in the middle of the array as you're already doing. No additional storage needed.
That'll take n log n time or so in Java. Best possible time is linear (you've got to inspect every element at least once to ensure you get the right answer). For pedagogical purposes, the additional complexity reduction isn't worthwhile.
If you can't modify the array in place, you have to trade significant additional time complexity to avoid using additional storage proportional to half the input's size. (If you're willing to accept approximations, that's not the case.)
Some not very efficient ideas:
For each value in the array, make a pass through the array counting the number of values lower than the current value. If that count is "half" the length of the array, you have the median. O(n^2) (Requires some thought to figure out how to handle duplicates of the median value.)
You can improve the performance somewhat by keeping track of the min and max values so far. For example, if you've already determined that 50 is too high to be the median, then you can skip the counting pass through the array for every value that's greater than or equal to 50. Similarly, if you've already determined that 25 is too low, you can skip the counting pass for every value that's less than or equal to 25.
In C++:
#include <algorithm>
#include <cassert>
#include <vector>

int Median(const std::vector<int> &values) {
    assert(!values.empty());
    const std::size_t half = values.size() / 2;
    int min = *std::min_element(values.begin(), values.end());
    int max = *std::max_element(values.begin(), values.end());
    for (auto candidate : values) {
        if (min <= candidate && candidate <= max) {
            const std::size_t count =
                std::count_if(values.begin(), values.end(),
                              [&](int x) { return x < candidate; });
            if (count == half) return candidate;
            else if (count > half) max = candidate;
            else min = candidate;
        }
    }
    return min + (max - min) / 2;
}
Terrible performance, but it uses no data structures and does not modify the input array.
I am trying to record PCM sound from Flash (using the Microphone class). I use the org.bytearray.micrecorder.MicRecorder helper class.
In the Microphone class I cannot find a property like bitDepth or bitsPerSample.
I always get 32 bits.
Is it possible to change this?
UPDATE: The asker John812 was able to solve this by using:
bit16_bytes.writeShort( data.readFloat() * 32767 ); (see the comments below for context)
METHOD #2: Based on my experience with using the loadPCMFromByteArray method
I have something you could try, but I've only used it with an actual 32-bit WAVE file played via the loadPCMFromByteArray command.
The AS3 Microphone class records 32 bits. You have to write the conversion of samples to a different bit depth yourself. I have no idea how many samples you are processing, but the general code below shows you how to convert. Note: * 512 means use your actual samples amount (for example * 4096 or * 8192). If you get the numbers wrong there'll be hiss/distortion, so either experiment starting small or provide the full details in your question for a more helpful edit/answer.
CONVERT: Assuming your recorded byteArray is called data
public var bit16_bytes : ByteArray; //will hold the 16bit version
public function convert_to16Bit () : void
{
    bit16_bytes = new ByteArray();
    data.position = 0;

    while (bit16_bytes.position < data.length - 4)
    {
        //if you get noise/distortion try either: 256, 512, 1024, 2048, 4096 or 8192
        bit16_bytes.writeShort( data.readInt() * 512 ); //multiply by samples amount
    }

    data = new ByteArray(); //recycle for re-use
    bit16_bytes.position = 0; //reset or else E-O-File error
    bit16_bytes.readBytes( data ); //copy 16bit back into Data byte-array
}
To run the above function whenever you're ready just add the line convert_to16Bit(); inside whatever function deals with your "recording complete" situation.