BackgroundTransferRequest status WaitingForWiFi when TransferPreferences is set to AllowCellularAndBattery - windows-phone-8

When I create a BackgroundTransferRequest with TransferPreferences set to AllowCellularAndBattery, I sometimes still get a TransferStatus of WaitingForWiFi. Is there any way to force the transfer to occur over 3G? This is very odd: sometimes I start 4 transfers, 2 of them will be fine, and the other 2 will end up in the WaitingForWiFi state.

I think the problem may be with file size; see Background Transfer Policies: there are size limits for transfers over 3G, and when the file is too large, TransferPreferences is automatically changed.
http://msdn.microsoft.com/en-us/library/windowsphone/develop/hh202955(v=vs.105).aspx
For now there seems to be no way to force transmission over 3G if the policy limits are not met.

Related

Looking for an example of a OBD-II complete data frame

I'm developing an OBD-II reader where I want to send requests to read PID parameters with an STM32 processor. I already understand what should go in the data field, but the ID is giving me a headache. As I have read, one must send 0x7DF to broadcast a request, and each ECU will respond with its own ID. However, I have been asked to do this within the SAE J1939 protocol, which uses the 29-bit extended identifier, and I don't know what I need to add to this ID.
As I stated in the title, could someone show me some actual data from a bus using this method? I've been searching the internet for real frames but have had no luck so far.
I would also appreciate it if someone could shed some light on whether OBD-II communication needs some acknowledgment to work properly.
Thanks
I would suggest you take a look at the SAE J1939 documentation, more specifically J1939/21, J1939-71, and J1939/73.
Generally, a J1939 transport protocol response sequence can be processed as follows:
Identify the BAM frame, indicating a new sequence being initiated (via PGN 60416 / 0xEC00, which can be reached via ID 0x1CECFF00)
Extract the J1939 PGN from bytes 6-8 of the BAM payload to use as the identifier of the new frame
Construct the new data payload by concatenating bytes 2-8 of the data transfer frames (i.e. excluding the 1st byte)
The data transfer frames are J1939 messages with ID 0x1CEBFF00 (PGN 60160, or 0xEB00).
In the example above, the last 3 bytes of the BAM equal E3 FE 00. When reordered, these equal the PGN FEE3, aka Engine Configuration 1 (EC1). Further, the payload is found by combining the first 39 bytes across the 6 data transfer packets/frames.
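The ID and PGN arithmetic above can be sketched in a few lines of Python (a minimal sketch following the J1939/21 bit layout; the example BAM payload is reconstructed from the numbers quoted above: 39 bytes, 6 packets, PGN bytes E3 FE 00):

```python
def parse_j1939_id(can_id):
    """Split a 29-bit J1939 CAN identifier into (priority, PGN, source address)."""
    priority = (can_id >> 26) & 0x7
    dp = (can_id >> 24) & 0x3     # EDP + DP bits
    pf = (can_id >> 16) & 0xFF    # PDU Format
    ps = (can_id >> 8) & 0xFF     # PDU Specific (dest. address or group extension)
    sa = can_id & 0xFF            # source address
    if pf < 240:                  # PDU1: PS is a destination address, not part of the PGN
        pgn = (dp << 16) | (pf << 8)
    else:                         # PDU2: PS is the group extension, part of the PGN
        pgn = (dp << 16) | (pf << 8) | ps
    return priority, pgn, sa

def bam_pgn(payload):
    """PGN announced in bytes 6-8 of a BAM/TP.CM payload (little-endian)."""
    return payload[5] | (payload[6] << 8) | (payload[7] << 16)

print(hex(parse_j1939_id(0x1CECFF00)[1]))  # 0xec00 -> PGN 60416, the TP.CM/BAM frame
print(hex(parse_j1939_id(0x1CEBFF00)[1]))  # 0xeb00 -> PGN 60160, the TP.DT data frames
# BAM payload: control byte 0x20 (BAM), size 39 (0x0027 LE), 6 packets, reserved, PGN
print(hex(bam_pgn(bytes([0x20, 0x27, 0x00, 0x06, 0xFF, 0xE3, 0xFE, 0x00]))))  # 0xfee3 -> EC1
```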
The administrative control device, or any device issuing the vehicle use status PID, should be sensitive to the run switch status (SPN 3046 - 0xFDC0, which can probably be reached via 0x0CFDC000) and any other locally defined criteria for authorized use (i.e., driver log-ons) before the vehicle use status PID is used to generate an unauthorized use alarm.
Also, don't forget to read/send using extended-ID messages, since J1939 uses 29-bit identifiers.
In fact, I suggest you use can-utils to make your analysis even easier. With a simple candump or cansniffer you can see what is coming in on your bus.
Some cars' DBC files: https://github.com/commaai/opendbc

Merge Json after several splits in Nifi

I split my JSON several times to avoid OOM errors. I've put a Wait processor to wait for all my records before using a MergeContent. Each FlowFile has been assigned an attribute with the original file's number of lines.
The Wait processor should hold the FlowFiles until the Notify increases the counter to the total number of lines.
However, it seems that my Wait processor is not putting my FlowFiles in the wait queue (it is not shown, but it exists).
Is there anything wrong in this piece of the flow?
You can do multiple merges by using UpdateAttribute after each Split to save the fragment.* attributes as something different, perhaps fragment1.*, fragment2.*, etc. Then you can restore each of them in reverse order with UpdateAttribute before each Merge, setting fragment.* to the fragment2.* attributes, then MergeContent, then set fragment.* to the fragment1.* attributes, then MergeContent, and so on.
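Concretely, the attribute renaming could look like this (a sketch only; processor ordering and the fragment1.* names are illustrative, using NiFi Expression Language):

```
SplitJson (1st split)
UpdateAttribute:  fragment1.identifier = ${fragment.identifier}
                  fragment1.count      = ${fragment.count}
                  fragment1.index      = ${fragment.index}
SplitJson (2nd split)
... process records ...
MergeContent (Merge Strategy: Defragment)   <- reassembles the 2nd split via fragment.*
UpdateAttribute:  fragment.identifier  = ${fragment1.identifier}
                  fragment.count       = ${fragment1.count}
                  fragment.index       = ${fragment1.index}
MergeContent (Merge Strategy: Defragment)   <- reassembles the 1st split
```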

Why amqsput command is dividing my data and push to the message queue?

While testing my application, I tried with a string taking up some 200 KB. But amqsput divided my request into multiple chunks, and I am not sure why. If I reduce the size to some 100 KB, it works fine.
I am using following command to push data into the message queue:
amqsput MESSAGE_QUEUE MQM < /home/usr/sampleRequest.xml
This sampleRequest.xml contains an XML document formatted as one line. I don't know much about MQ administration/configuration and would like to know what's wrong.
Why is it dividing my data and pushing it to the queue in pieces when the file size is greater than a certain value?
amqsput & amqsget are simple applications for putting and getting small messages to and from a queue. If you look at the code for amqsput (i.e. amqsput0.c), you will see that the buffer size used is 65535 (64KB).
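As a back-of-the-envelope check (assuming the 64 KB buffer mentioned above), a single 200 KB line would come out as four separate messages:

```python
import math

BUFFER_SIZE = 65535            # buffer used by amqsput (see amqsput0.c)
payload = 200 * 1024           # one ~200 KB line from sampleRequest.xml

print(math.ceil(payload / BUFFER_SIZE))  # -> 4: the line is put as 4 separate messages
```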
There are lots of programs that are better suited for your type of testing. There is a long list of C sample MQ applications here. The 2 that you might want to use are file2msg and msg2file. There is also Paul Clarke's QLoad program (it used to be a SupportPac).

Storm dprc thrift7.transport.TTransportException: Frame size (1213486160) larger than max length (1048576)!

I use Storm 0.10.0 to deploy a DRPC topology to a Storm cluster, but I get a TTransportException.
The code is:
DRPCClient client = new DRPCClient(map, "10.10.5.92", 3774, 5000);
System.out.println(client.execute("match-drpc", "cat"));
The error is:
Exception in thread "main" org.apache.thrift7.transport.TTransportException: Frame size (1213486160) larger than max length (1048576)!
at org.apache.thrift7.transport.TFramedTransport.readFrame(TFramedTransport.java:137)
at org.apache.thrift7.transport.TFramedTransport.read(TFramedTransport.java:101)
at org.apache.thrift7.transport.TTransport.readAll(TTransport.java:86)
at org.apache.thrift7.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429)
at org.apache.thrift7.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318)
at org.apache.thrift7.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219)
at org.apache.thrift7.TServiceClient.receiveBase(TServiceClient.java:69)
at backtype.storm.generated.DistributedRPC$Client.recv_execute(DistributedRPC.java:106)
at backtype.storm.generated.DistributedRPC$Client.execute(DistributedRPC.java:92)
at backtype.storm.utils.DRPCClient.execute(DRPCClient.java:59)
1213486160 is not an actual packet length. It is ASCII "HTTP" interpreted as a 32-bit big-endian integer. Your DRPCClient is not speaking the protocol you expect; the endpoint it connected to is actually a web server.
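You can verify this yourself: the mysterious "frame size" is just the first four bytes of an HTTP response packed as a big-endian 32-bit integer.

```python
import struct

frame_size = 1213486160
# Packing the reported frame size back into 4 big-endian bytes recovers the
# ASCII text the client actually received on the wire.
print(struct.pack(">i", frame_size))  # b'HTTP' -- the peer answered with HTTP, not Thrift
```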
You need to increase nimbus.thrift.max_buffer_size in your storm.yaml file. Afterwards, restart the cluster (otherwise, the new value is not considered).
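For example, in storm.yaml (10 MB is an illustrative value, not a recommendation; the default is 1 MB):

```yaml
nimbus.thrift.max_buffer_size: 10485760
```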
STORM-1469 is related to this problem, however its pull requests are not all merged, so the default transport plugin is still the old one (SimpleTransportPlugin).
Adding the following config fixed the problem in Storm v 1.0.2 for me (should work for 0.10.x as well).
storm.thrift.transport: "org.apache.storm.security.auth.plain.PlainSaslTransportPlugin"

Couchbase: 1 MB size exceeded for value in index view

I see that a value in a view has exceeded the 1 MB size limit (it is 1.7 MB) and is thereby not getting emitted in views. I have tried changing the value of max_kv_size_per_doc in default.ini (then restarted Couchbase), but the value is still not getting emitted.
Could someone please suggest a workaround?
Yes: don't emit documents in views - it's not a recommended practice. In fact, by emitting docs in views you're creating a copy of the doc on your storage (== bad). 100 docs + a view that returns them = the space of 200 docs.
Instead, emit keys for docs and retrieve them after you get the results from the view. Or just emit the part of the doc that you need (if it's smallish).
Edit: I'm guessing you haven't tried the include_docs option? It should attach the complete doc to your results without creating a duplicate.
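As a sketch, the "emit keys, not docs" pattern for a view's map function looks like this (a design-document fragment, not standalone code; emit is provided by the view engine):

```javascript
// Emit only the key and a null value; the index stores no copy of the doc body.
// Retrieve the full documents afterwards with include_docs or separate gets.
function (doc, meta) {
  emit(meta.id, null);
}
```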