I used usbmon to analyse the USB packets so I could reimplement the exchange in WebUSB, but I wasn't able to find a solution for this. This is what SANE sends to the USB device:
S Co:1:074:0 s 02 01 0000 0081 0000 0
C Co:1:074:0 0 0
S Co:1:074:0 s 02 01 0000 0002 0000 0
C Co:1:074:0 0 0
This is similar to a controlTransferOut() command with requestType: 'standard', recipient: 'endpoint', request: 1, index: 0x00, value: 129.
The value parameter is the tricky part: all the other parameters should be correct according to the documentation, yet sending value: 129 should produce something like:
S Co:1:074:0 s 02 01 0081 0000 0000 0
However, what I got instead is:
Uncaught (in promise) DOMException: The specified endpoint number is out of range.
Yet value is an unsigned short, whose maximum is 0xFFFF! Looking at the trace again, value should apparently be 0 and the next field should hold 0x0081. My question is: how do I trigger a control OUT transfer (Co) with 0x0081 in that second field?
The code is something like this:
navigator.usb.requestDevice({ filters: [{ vendorId: 0x1083 }] })
  .then(selectedDevice => {
    device = selectedDevice;
    return device.open(); // Begin a session.
  })
  .then(() => device.selectConfiguration(1))
  .then(() => device.claimInterface(0))
  .then(() => device.controlTransferOut({
    requestType: 'standard',
    recipient: 'endpoint',
    request: 0x00,
    value: 129,
    index: 0x00
  }))
All other combinations are answered with a "Stall", for example class/interface (21) or vendor/device (40), etc.
The device descriptor and endpoint descriptor are available here.
Thank you
Just found it; the request should be:
device.controlTransferOut({
  requestType: 'standard',
  recipient: 'endpoint',
  request: 1,
  value: 0,
  index: 129
})
This gives:
S Co:1:075:0 s 02 01 0000 0081 0000 0
C Co:1:075:0 0 0
Which is exactly what I need.
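For reference, the groups in the usbmon setup line s 02 01 0000 0081 0000 0 are bmRequestType (02), bRequest (01), wValue (0000), wIndex (0081) and wLength (0000), and WebUSB's value and index map to wValue and wIndex. So the trace is simply a standard CLEAR_FEATURE request (bRequest 1) for feature ENDPOINT_HALT (value 0) on endpoint 0x81 (index 129). A minimal async/await sketch of the same call (the helper name is mine; the configuration, interface and endpoint numbers are the ones from my device):

// Clear the ENDPOINT_HALT feature on IN endpoint 0x81.
// Maps to: bmRequestType=0x02, bRequest=0x01, wValue=0x0000, wIndex=0x0081.
async function clearEndpointHalt(device) {
  await device.open();
  await device.selectConfiguration(1);
  await device.claimInterface(0);
  await device.controlTransferOut({
    requestType: 'standard',
    recipient: 'endpoint',
    request: 0x01,  // CLEAR_FEATURE
    value: 0x00,    // feature selector: ENDPOINT_HALT
    index: 0x81     // the endpoint address belongs in wIndex, not wValue
  });
}

WebUSB also exposes device.clearHalt('in', 1), which should issue the same request without building it by hand.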
I configured Artemis to redeliver failed messages like this (broker.xml):
<address-setting match="#">
   <dead-letter-address>DLQ</dead-letter-address>
   <expiry-address>ExpiryQueue</expiry-address>
   <!-- with -1 only the global-max-size is in use for limiting -->
   <max-size-bytes>-1</max-size-bytes>
   <message-counter-history-day-limit>10</message-counter-history-day-limit>
   <address-full-policy>PAGE</address-full-policy>
   <auto-create-queues>true</auto-create-queues>
   <auto-create-addresses>true</auto-create-addresses>
   <auto-create-jms-queues>true</auto-create-jms-queues>
   <auto-create-jms-topics>true</auto-create-jms-topics>
   <max-delivery-attempts>3</max-delivery-attempts>
   <redelivery-delay>30000</redelivery-delay>
   <max-redelivery-delay>30000</max-redelivery-delay>
</address-setting>
[...]
<address-setting match="email">
   <redelivery-delay-multiplier>3</redelivery-delay-multiplier>
   <redelivery-delay>60000</redelivery-delay>
   <max-redelivery-delay>86400000</max-redelivery-delay>
</address-setting>
[...]
<addresses>
   <address name="email">
      <anycast>
         <queue name="email" />
      </anycast>
   </address>
</addresses>
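If I read the documentation correctly, a message on the email address should therefore be delivered at most max-delivery-attempts = 3 times: the first redelivery after redelivery-delay = 60000 ms, the second after 60000 * 3 = 180000 ms (redelivery-delay-multiplier), with every delay capped at max-redelivery-delay = 86400000 ms, and only after the third failed attempt should the message go to the dead-letter address DLQ.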
I use Spring Boot for both the sender and the receiver. After sending a message to the queue it appears in Artemis' queue and the receiver is called. The processing fails. But instead of putting the message back on the queue and retrying delivery, this happens:
09:57:37,672 WARN [org.apache.activemq.artemis.core.server] AMQ222149: Message Reference[53]:RELIABLE:CoreMessage[messageID=53,durable=true,userID=null,priority=4, timestamp=Mon Jan 29 09:57:37 UTC 2018,expiration=0, durable=
true, address=email,properties=TypedProperties[__HDR_dlqDeliveryFailureCause=java.lang.Throwable: Delivery[1] exceeds redelivery policy limit:RedeliveryPolicy {destination = null, collisionAvoidanceFactor = 0.15, maximumRedeli
veries = 0, maximumRedeliveryDelay = -1, initialRedeliveryDelay = 1000, useCollisionAvoidance = false, useExponentialBackOff = false, backOffMultiplier = 5.0, redeliveryDelay = 1000, preDispatchCheck = true}, cause:null,__HDR_
BROKER_IN_TIME=1517219857591,interaction_id=b0b5cb2c618f4605a11ad599dfd7f54b,_AMQ_ROUTING_TYPE=1,__HDR_ARRIVAL=0,__HDR_GROUP_SEQUENCE=0,__HDR_COMMAND_ID=36,__HDR_MARSHALL_PROP=[0000 0001 000E 696E 7465 7261 6374 696F 6E5F 6964
0900 2062 3062 3563 6232 6336 3138 6634 3630 3561 3131 6164 3539 3964 6664 3766 3534 62),__HDR_PRODUCER_ID=[0000 003D 7B01 0029 4944 3A64 652D 6164 6E2D 6266 786A 6E63 322D 3532 3232 ... 37 3231 3934 3238 3032 362D 313A 3100
0000 0000 0000 0100 0000 0000 0000 09),_AMQ_DUPL_ID=ID:de-adn-bfxjnc2-52228-1517219428026-1:1:9:1:3,__HDR_MESSAGE_ID=[0000 0050 6E00 017B 0100 2949 443A 6465 2D61 646E 2D62 6678 6A6E 6332 2D35 ... 0000 0000 0001 0000 0000 0
000 0009 0000 0000 0000 0003 0000 0000 0000 0000),__HDR_DROPPABLE=false]]#693556289 has reached maximum delivery attempts, sending it to Dead Letter Address DLQ from email
It looks to me as if my configuration is completely ignored.
Can anybody tell me what is wrong in my configuration?
I have a Tcl Expect connection to a telnetd and want to send the telnet BREAK.
For that, the telnetd must be told to go into command mode, so the IAC (255) has to be sent.
After this, BRK (243) has to be sent.
I verified this with a PuTTY -> telnetd connection. PuTTY can send "BREAK", and the network traffic shows what is expected: 255/243.
When I send IAC (255)/BRK (243) with Tcl Expect, I see in the network traffic that three bytes (255/255/243) were sent.
I found out that when I send e.g. 254, I see one byte.
When I send 255, it is two bytes.
I suspect that 255, which is -1 or 0xff, has something special in Expect.
How can I get just 255 onto the wire?
fconfigure $channel -translation binary
exp_send -i $channel -- [binary format H4 FFF3]
This sends "ff ff f3" to the telnetd.
As mentioned in the response, yes, the channel translation has to be taken into account, so I added fconfigure to turn it off.
Here is my code:
package require Expect
spawn telnet localhost
fconfigure $spawn_id -translation binary
exp_send "[binary format H4 FFf3]"
after 2000
When I look at the wire with tcpdump -X -i localhost port 23, I see ff ff f3:
11:26:10.358187 IP localhost.44802 > localhost.telnet: Flags [P.], seq 129:132, ack 148, win 342, options [nop,nop,TS val 1826173122 ecr 1826168178], length 3
0x0000: 4510 0037 835b 4000 4006 b953 7f00 0001 E..7.[@.@..S....
0x0010: 7f00 0001 af02 0017 b004 9eec f815 49e3 ..............I.
0x0020: 8018 0156 fe2b 0000 0101 080a 6cd9 30c2 ...V.+......l.0.
0x0030: 6cd9 1d72 ffff f3
I look into telnet with strace and see:
27482 12:37:28 read(0, "\377\363", 8192) = 2
27482 12:37:28 select(4, [0 3], [3], [3], {0, 0}) = 1 (out [3], left {0, 0})
27482 12:37:28 sendto(3, "\377\377\363", 3, 0, NULL, 0) = 3
ff f3 is received from Expect; ff ff f3 is sent to the telnetd.
I was totally wrong in my thinking.
Because I use "telnet" spawned within Expect, I have to send "Ctrl-]" followed by "send brk\r".
And everything is fine.
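For completeness, a minimal sketch of that approach (the host name and the login prompt pattern are placeholders for my setup):

package require Expect

spawn telnet somehost          ;# placeholder host
expect "login:"                ;# placeholder prompt, wait until the session is up

# Escape into the spawned telnet client's command mode with Ctrl-] (0x1d),
# then let telnet itself put IAC BRK on the wire.
exp_send "\x1d"
expect "telnet>"
exp_send "send brk\r"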
I am trying to stream WebM video that is generated with GStreamer, with individual frames sent over WebSockets. A typical byte layout of a WebM file looks like this (you may already be familiar with it):
EBML (head size: 12 bytes, data: 16 bytes, pos: 0, '0x0')
DocType (head size: 3 bytes, data: 5 bytes, pos: 12L, '0xcL') : 'webm\x00'
DocTypeVersion (head size: 3 bytes, data: 1 bytes, pos: 20L, '0x14L') : 2
DocTypeReadVersion (head size: 3 bytes, data: 1 bytes, pos: 24L, '0x18L') : 2
...
...
SegmentInfo (head size: 12 bytes, data: 91 bytes, pos: 192L, '0xc0L')
TimecodeScale (head size: 4 bytes, data: 3 bytes, pos: 204L, '0xccL') : 1000000
Duration (head size: 3 bytes, data: 8 bytes, pos: 211L, '0xd3L') : 0.0
MuxingApp (head size: 3 bytes, data: 31 bytes, pos: 222L, '0xdeL') : 'GStreamer plugin version 1.2.4\x00'
WritingApp (head size: 3 bytes, data: 25 bytes, pos: 256L, '0x100L') : 'GStreamer Matroska muxer\x00'
DateUTC (head size: 3 bytes, data: 8 bytes, pos: 284L, '0x11cL') : 447902803000000000L
Video (head size: 9 bytes, data: 8 bytes, pos: 295L, '0x127L')
Pixel Width (head size: 2 bytes, data: 2 bytes, pos: 351L, '0x15fL') : 640
Pixel Height (head size: 2 bytes, data: 2 bytes, pos: 355L, '0x163L') : 480
Codec Id (head size: 2 bytes, data: 6 bytes, pos: 359L, '0x167L') : 'V_VP8\x00'
Cluster (head size: 12 bytes, data: 72057594037927935L bytes, pos: 367L, '0x16fL')
TimeCode (head size: 2 bytes, data: 2 bytes, pos: 379L, '0x17bL') : 1514
SimpleBlock (head size: 4 bytes, data: 44618 bytes, pos: 383L, '0x17fL') : 'binary'
track number : 1, keyframe : True, invisible : 'no', discardable : 'no'
lace : 'no lacing', time code : 0, time code(absolute) : 1514
SimpleBlock (head size: 3 bytes, data: 793 bytes, pos: 45005L, '0xafcdL') : 'binary'
track number : 1, keyframe : False, invisible : 'no', discardable : 'no'
lace : 'no lacing', time code : 27, time code(absolute) : 1541
<<continued....>>
What I see is that the absolute and relative time codes are written correctly when I redirect the GStreamer output to a filesink. The same GStreamer pipeline is used to extract the byte sequences (samples). These samples are then transmitted over the WebSocket and received on the client side using the MediaSource API.
My client-side JavaScript implementation is described here.
When I run the client in Firefox, the video runs smoothly without any glitches. But on Chrome, the video freezes after some time or at the beginning.
I tried setting sourceBuffer.mode to "sequence" or "segments"; neither option works on Chrome, whereas the video feed in Firefox is totally unaffected by the value of sourceBuffer.mode. The description of these modes is here. (I am assuming that the MediaSource API works the same way in IE and Firefox, as there is no documentation available on the Mozilla website.)
Also, mediaSource.duration is Infinity/NaN in both Chrome and Firefox.
No matter what I try, the live feed on Chrome does not work at all, whereas Firefox displays smooth video. Any suggestions why this could be happening?
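For context, the client is essentially the standard MediaSource pattern; the sketch below is a simplified stand-in for my actual code (the WebSocket URL and the MIME string are placeholders):

const video = document.querySelector('video');
const mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', () => {
  const sourceBuffer = mediaSource.addSourceBuffer('video/webm; codecs="vp8"');
  const queue = [];

  // Append queued chunks one at a time; appendBuffer() is asynchronous
  // and throws if called while the buffer is still updating.
  sourceBuffer.addEventListener('updateend', () => {
    if (queue.length > 0 && !sourceBuffer.updating) {
      sourceBuffer.appendBuffer(queue.shift());
    }
  });

  const ws = new WebSocket('ws://localhost:8080/stream');  // placeholder URL
  ws.binaryType = 'arraybuffer';
  ws.onmessage = (event) => {
    if (sourceBuffer.updating || queue.length > 0) {
      queue.push(new Uint8Array(event.data));
    } else {
      sourceBuffer.appendBuffer(new Uint8Array(event.data));
    }
  };
});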
UPDATES:
I upgraded to Chrome version 41, which gives more details in chrome://media-internals. The message that is shown is:
render_id: 23
player_id: 1
pipeline_state: kStopped
EVENT: WEBMEDIAPLAYER_DESTROYED
url: blob:http%3A//localhost%3A8080/172f68c8-9ff3-4983-9dcb-396b3f843752
found_video_stream: true
video_codec_name: vp8
duration: unknown
video_dds: false
video_decoder: FFmpegVideoDecoder
error: Append: stream parsing failed. Data size=2283 append_window_start=0 append_window_end=inf
pipeline_error: pipeline: decode error
Timestamp Property Value
00:00:00 00 pipeline_state kCreated
00:00:00 00 EVENT PIPELINE_CREATED
00:00:00 00 EVENT WEBMEDIAPLAYER_CREATED
00:00:00 00 url blob:http%3A//localhost%3A8080/172f68c8-9ff3-4983-9dcb-396b3f843752
00:00:00 00 pipeline_state kInitDemuxer
00:00:01 687 found_video_stream true
00:00:01 692 video_codec_name vp8
00:00:01 692 duration unknown
00:00:01 692 pipeline_state kInitRenderer
00:00:01 694 video_dds false
00:00:01 694 video_decoder FFmpegVideoDecoder
00:00:01 695 pipeline_state kPlaying
00:00:10 989 EVENT PLAY
00:00:11 276 error Got a block with a timecode before the previous block.
00:00:11 276 error Append: stream parsing failed. Data size=2283 append_window_start=0 append_window_end=inf
00:00:11 276 pipeline_error pipeline: decode error
00:00:11 276 pipeline_state kStopping
00:00:11 277 pipeline_state kStopped
00:01:14 239 EVENT WEBMEDIAPLAYER_DESTROYED
How do I fix or calculate the "append_window_end"?
I am inspecting the decoder configuration record contained in .mp4 video files recorded on Android devices. Some devices have strange or incorrect parameters written in the decoder configuration record.
Here is a sample from a Galaxy Player 4.0, which is incorrect:
DecoderConfigurationRecord: 010283f2ffe100086742000de90283f201000568ce010f20
pictureParameterSetNALUnits : 68ce010f20
AVCLevelIndication : 242
AVCProfileIndication : 2
sequenceParameterSetNALUnits : 6742000de90283f2
lengthSizeMinusOne : 3
configurationVersion : 1
profile_compatibility : 131
profile_idc : 103
constraint_set : 16
level_idc : 0
AVCLevelIndication == 242 is wrong because the standard states that 51 is the highest value.
AVCProfileIndication should be one of (66, 77, 88, 100, 120, ...).
profile_compatibility holds the constraint_set?_flags, and the 2 least significant bits are reserved and supposed to be equal to 0.
This is how it should look:
DecoderConfigurationRecord: 0142000dffe100086742000de90283f201000568ce010f20
pictureParameterSetNALUnits : 68ce010f20
AVCLevelIndication : 13
AVCProfileIndication : 66
sequenceParameterSetNALUnits : 6742000de90283f2
lengthSizeMinusOne : 3
configurationVersion : 1
profile_compatibility : 0
profile_idc : 103
constraint_set : 16
level_idc : 0
How can AVCLevelIndication and AVCProfileIndication be deduced from profile_idc and level_idc?
Is there a way to check or possibly fix wrong parameters by comparing them to the SPS parameters?
level_idc is 10 * level, i.e. if you're using level 3.1, it will be 31.
profile_idc is specified in Annex A of ISO/IEC 14496-10. Baseline profile is 66, Main Profile is 77 and Extended Profile is 88 for example.
Additionally, you can see the syntax for the SPS RBSP and PPS RBSP in sections 7.3.2.1 and 7.3.2.2 respectively. Note that ue(x) and se(x) indicate unsigned and signed exponential Golomb coding.
Edit: My apologies. The AVCProfileIndication and AVCLevelIndication should be the same as profile_idc and level_idc.
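Concretely, you can sanity-check or repair the record by reading profile_idc, the constraint-flags byte and level_idc straight out of the first SPS NAL unit and copying them into bytes 1-3 of the avcC record. Here is a rough Python sketch; it assumes the common single-SPS layout shown above, with the SPS length field at offset 6, and the function name is just illustrative:

# Rebuild AVCProfileIndication / profile_compatibility / AVCLevelIndication
# in an AVCDecoderConfigurationRecord from its first SPS.
def fix_avcc(avcc_hex):
    rec = bytearray(bytes.fromhex(avcc_hex))
    sps_len = int.from_bytes(rec[6:8], 'big')   # length of the first SPS
    sps = rec[8:8 + sps_len]                    # e.g. 67 42 00 0d e9 02 83 f2
    # SPS layout: NAL header, profile_idc, constraint flags, level_idc, ...
    rec[1] = sps[1]   # AVCProfileIndication  <- profile_idc (0x42 = 66)
    rec[2] = sps[2]   # profile_compatibility <- constraint_set flags (0x00)
    rec[3] = sps[3]   # AVCLevelIndication    <- level_idc (0x0d = 13)
    return rec.hex()

print(fix_avcc("010283f2ffe100086742000de90283f201000568ce010f20"))
# -> 0142000dffe100086742000de90283f201000568ce010f20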
I have an application which converts each character in my string to a 3-digit number. It seems to be something like ASCII, but it's not that. I'm trying to figure it out but I can't understand it:
somefunction() {
    a => 934   // a will be converted to 934
    b => 933   // b will be converted to 933
    1 => 950   // 1 will be converted to 950
    0 => 951   // 0 will be converted to 951
}
I know ASCII, but I don't understand this. Please help if you know what type of encoding this is.
Thank you :)
Here's one possibility (ord returns the ASCII value of the character), but I think you'd really need several more data points to know for certain.
>>> for c in 'ab10': print c, 999 - ord(c.upper())
...
a 934
b 933
1 950
0 951
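If that guess is right, decoding is just the inverse; note that the original case is lost because of the upper() on the way in:
>>> for n in (934, 933, 950, 951): print chr(999 - n),
...
A B 1 0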