Can I connect to my car's CAN bus with an ELM327 interface?

I've been fiddling around with a Bluetooth ELM327 device I bought a few months ago and am able to get standard OBD info like VIN, RPM, speed, etc.
But as I recently read, OBD-II and CAN are not the same thing. I've tried to sniff my CAN bus with the AT MA command, but I get no response, so I guess the CAN network is decoupled from the OBD-II interface. Is there any chance to get access to the CAN network, or might I need a different device to do so?
Maybe this info helps: I have a 2011 Skoda.

On many modern vehicles there are actually multiple CAN buses controlling the numerous functions needed by the car. Some of these CAN buses are high-speed for important systems like engine control, and some are low-speed for less critical functions such as climate control (or in your case, diagnostics through the OBD2 port). These multiple CAN buses are usually interconnected through a gateway device in the car that arbitrates which CAN messages can be sent between buses. This is a safety net that prevents lower priority CAN buses from interfering with the more critical CAN buses.
In an example case, the CAN bus used for engine control may be able to communicate with the radio's CAN bus so that the radio volume gets increased when the engine is revving to higher RPMs, for comfort reasons. This would likely be a one-way connection through the gateway, though, as it would be in the interest of safety not to allow the radio's CAN bus to send signals back to the engine (this could lead to potential problems when using aftermarket radios, for example).
As a result of everything mentioned above, a connection to the OBD2 port's CAN lines most likely will not have full access to the complete CAN network on your car. One way to confirm this would be to look for the Factory Service Manual for your particular vehicle to see how the CAN bus(es) are set up for your car (there are actually quite a few cars that operate on only a single CAN bus in order to cut costs).
Keep in mind that as an alternative to using the OBD2 port, you can always tap directly into the CAN bus that you are interested in. For example, if you remove the radio from your car to expose the radio harness, you can usually tap directly into the CAN lines for the radio bus with the correct equipment.
Hope this helps!

If your vehicle uses the CAN protocol, then you should be able to issue ATMA from the ELM327 device.
Here are the conditions I met to get an ATMA dump:
my vehicle supports protocol 6 -- ISO 15765-4 CAN, 11-bit ID (500 kbaud)
ATSP6 // I am using protocol 6, not auto mode
ATSH7E0 // now I am talking to the engine ECU
ATMA // returned a page full of data before getting a buffer full message
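For reference, here is a minimal sketch of driving those same commands from a script over the adapter's serial port. It assumes pyserial and a hypothetical port name for the Bluetooth ELM327; adjust the port and baud rate for your device.

import serial  # pyserial

# Hypothetical port name; a Bluetooth ELM327 usually appears as a virtual
# serial port (e.g. /dev/rfcomm0 on Linux or COM5 on Windows).
PORT = "/dev/rfcomm0"

def send(ser, cmd):
    """Send one command and read until the ELM327 prompt '>' or a read timeout."""
    ser.write((cmd + "\r").encode("ascii"))
    response = b""
    while not response.endswith(b">"):
        chunk = ser.read(1)
        if not chunk:          # read timed out
            break
        response += chunk
    return response.decode("ascii", errors="replace")

with serial.Serial(PORT, 38400, timeout=5) as ser:
    print(send(ser, "ATZ"))      # reset the adapter
    print(send(ser, "ATSP6"))    # force protocol 6: ISO 15765-4 CAN, 11-bit, 500 kbaud
    print(send(ser, "ATSH7E0"))  # address requests to the engine ECU
    print(send(ser, "ATMA"))     # monitor all traffic until BUFFER FULL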


TVR bits match TAC Online, but transaction does NOT go online?

I have a scenario where the EMV Contactless card image (American Express) SHOULD decline offline; however, the Ingenico PinPad is going online and approving and the VeriFone is declining offline.
Even though this scenario SHOULD decline offline, I am convinced it should go ONLINE. I think the VeriFone result is a false positive and the Ingenico is doing the right thing by going ONLINE.
The purpose of this scenario is to ensure that the terminal declines a transaction offline when CDA fails.
The card image has an IAC Denial of "0000000000" and IAC Online of "F470C49800".
The TVR that gets generated during the first GENERATE AC (1AC) is '0400008000'.
The TAC Denial is set to "0010000000" and the TAC Online is set to "DE00FC9800".
TVR = "0400008000"
IAC_Denial = "0000000000"
TAC_Denial = "0010000000"
IAC_Online = "F470C49800"
TAC_Online = "DE00FC9800"
When comparing the TVR to the TAC Denial (which should happen first according to EMV Book 3, Terminal Action Analysis), there are NO matching bits. So the next thing that should happen is that the TVR is matched against the TAC Online. When comparing the bits of the TVR to the TAC Online, the bits that match are "CDA Failed" and "Exceeds Floor Limit".
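For what it's worth, here is a small sketch of that bit comparison (a plain bitwise AND of the TVR against the combined IAC/TAC, using the values above) just to make the matching explicit; it covers only the comparison step, not a full Terminal Action Analysis implementation.

# Values from the scenario above (5-byte fields written in hex)
TVR        = 0x0400008000
IAC_DENIAL = 0x0000000000
TAC_DENIAL = 0x0010000000
IAC_ONLINE = 0xF470C49800
TAC_ONLINE = 0xDE00FC9800

def matches(tvr, action_codes):
    """True if any bit set in the TVR is also set in the combined action codes."""
    return (tvr & action_codes) != 0

# Denial check first: no overlapping bits, so no offline decline on this basis
print(matches(TVR, IAC_DENIAL | TAC_DENIAL))   # False

# Online check next: the 'CDA failed' and 'exceeds floor limit' bits overlap
print(matches(TVR, IAC_ONLINE | TAC_ONLINE))   # True -> request online authorization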
This indicates to me that this should go ONLINE; however, as previously stated the scenario is ensuring that it declines OFFLINE.
In a nutshell, the VeriFone PinPad is giving a false-positive by declining OFFLINE without using the Terminal Action Analysis logic.
However, the Ingenico seems to be doing the right thing by going ONLINE.
Is there something that I am missing?
Are there any configurations that can override the Terminal Action Analysis (matching the TVR to the TACs) and prevent a transaction from going online?
Could this be an issue with the VeriFone kernel?
Thanks.
I have often run into this kind of behavior when my POS terminal was not properly configured.
Often, scenarios like this one will have thresholds to configure in your terminal according to its standards. For instance, my terminal was configured according to the SEPA-FAST standards.
There was a threshold for the maximum amount value to approve offline. This is useful for merchants that want to approve small amounts offline for effectiveness and speed when they have long lines of customers to process. Think of a cafeteria or a bus line. Of course, this is slightly risky and many merchants won't approve high amounts without an online approval to reduce their loss due to invalid/fraudulent payments.
In my opinion, your offline threshold looks fine. The transaction amount exceeds it and it is refused offline for the obvious reasons I explained to you before.
Perhaps your maximum threshold is badly configured. Most scenarios require you to set a maximum amount threshold over which the transaction is refused offline.
One other thing that could be wrong is your EMV Terminal Capabilities (tag 9F33): perhaps it indicates support for Online PIN authentication when it shouldn't. Maybe you aren't using the terminal prescribed by the scenario. What is your CVM? Should it be supported by your terminal? There is also the EMV Terminal Transaction Qualifiers (TTQ, tag 9F66) field for NFC transactions, which plays a similar role in defining what a terminal can and cannot do. Maybe your terminal should be offline-only in this scenario. This can happen for pizza deliveries or in situations where an internet connection is not available.
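As an illustration of the threshold idea above, here is a minimal sketch of an offline-amount check; the limit value and the function are placeholders for whatever your terminal profile actually configures, not a real SEPA-FAST parameter set.

# Hypothetical terminal configuration (amounts in minor currency units)
OFFLINE_FLOOR_LIMIT = 2500   # placeholder value, e.g. 25.00

def may_approve_offline(amount, cda_failed):
    """Rough sketch: only small amounts with a passing CDA are offline candidates."""
    if cda_failed:
        return False                    # a CDA failure should never be approved offline
    return amount <= OFFLINE_FLOOR_LIMIT

print(may_approve_offline(1000, cda_failed=False))  # True  -> offline approval possible
print(may_approve_offline(5000, cda_failed=False))  # False -> go online
print(may_approve_offline(1000, cda_failed=True))   # False -> decline or go online per TAA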

Scaling a Server Horizontally

I have a question about scalability. Let's say I have a multiplayer game, such as Uno, where the server handles everything. (Assume this is a text-only game for simplicity). For example, to get information printed out to the user in the client, the server might send PRINT string, or CHOOSE data (to pick a card to play), etc. In this regard, the client is "dumb" and the server handles the game logic.
A quick example of how this might work on a protocol level:
Server sends: PRINT Choose a card
Server sends: CHOOSE Red 1,Blue 1 (user shown a button or something, and picks Red 1)
Client sends: Red 1
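Just to make that concrete, here is a minimal sketch of a dumb client loop handling those two message types; the host, port, and newline framing are only illustrative assumptions about the protocol, not a specific implementation.

import socket

# Hypothetical server address, for illustration only
HOST, PORT = "localhost", 5000

def handle_line(line, sock):
    """Dispatch one newline-terminated message from the text protocol."""
    command, _, payload = line.partition(" ")
    if command == "PRINT":
        print(payload)                    # just show the text to the user
    elif command == "CHOOSE":
        options = payload.split(",")      # e.g. ["Red 1", "Blue 1"]
        print("Options:", ", ".join(options))
        choice = input("> ")              # a real client would show buttons instead
        sock.sendall((choice + "\n").encode())

with socket.create_connection((HOST, PORT)) as sock:
    buffer = ""
    while True:
        data = sock.recv(4096)
        if not data:
            break
        buffer += data.decode()
        while "\n" in buffer:
            line, buffer = buffer.split("\n", 1)
            handle_line(line, sock)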
Let's say I have this architecture:
Player Class: stores the cards the user has, maybe some methods (such as tellData(String data) which would send PRINT data, sendPM() which could private message a user)
Server Class: handles authentication, allows users to create new games, shows users a list of games they can join
Game Class: handles users playing a card, handles switching to a new player for his or her turn, calls methods on player class like tellData(), pickCard(), etc
How would I scale this, to run the server on multiple computers? Right now, all of the users connect to one server, and require the Player, Server, and Game class to interact with each other. If someone could provide some suggestions, and/or point me to some good resources/books on this, it would be greatly appreciated (no, this is not a homework assignment or something for a business, this is just a personal project and curiosity of mine). In terms of scalability, I'd like to just be able to add another server, and handle the additional load of players--but the most concurrent connections would be 1000.
Also, would this become significantly more difficult of a scalability challenge if we added in more games?
Furthermore, what is the best way to store game data? In a SQL database, or by serializing objects, or what? By this I mean: let's say 3 users are in a game of Uno and want to return to it later. I don't want to keep their cards and information about the game in the Player/Server/Game classes (RAM) forever - I want to dump this somewhere, so that when the user logs in, the info can be loaded from wherever it was dumped back into RAM and into the appropriate Player/Game objects.
Finally, how can I make changes to the server without having to kill it, and restart it? Assume the server was written in Java or Python.
If anyone can provide suggestions or some resources it would be greatly appreciated - this includes changing the architecture I originally stated.
Thanks for any and all help!!
EDIT: Are there any good books or talks you all would recommend on the subject?
1. Scalability:
This involves an application architecture where the session is replicated/shared and load-balanced across multiple server instances. You can choose to implement a message queue (RabbitMQ) or ESB (enterprise service bus) architecture for your app.
2. Ease of scaling:
Depends on deployment and the servers you choose.
3. Persistence:
A game, for a given player, is just that player's particular game state at a point in time. If you can represent the state information semantically, you can keep the data in markup save files or store the state information directly in a DB (see the sketch below).
Otherwise, you may need to serialize objects and store them on the filesystem or as a BLOB in a DB if the state space is humongous.
4. Hot deployment:
The JVM will almost always need a restart to reload class files, so on the Java server side you will generally need to restart. In Ruby/Rails, certain parts of the application can be hot-deployed. If you need 100% hot deployability, perhaps Erlang is the answer.
To improve concurrency you can also use evented server/app architectures: Thin/EventMachine for Ruby, or Apache MINA and JBoss Netty for Java.
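On the persistence point (3) above, here is a minimal sketch of dumping and restoring a game's state as JSON in SQLite; the field names and table layout are purely illustrative, and your real Game/Player classes would expose their own snapshot methods.

import json
import sqlite3

# Purely illustrative snapshot of an Uno game's state
state = {
    "game_id": "uno-42",
    "current_player": "alice",
    "hands": {"alice": ["Red 1", "Blue 7"], "bob": ["Green 3"]},
    "discard_top": "Red 5",
}

db = sqlite3.connect("games.db")
db.execute("CREATE TABLE IF NOT EXISTS games (id TEXT PRIMARY KEY, state TEXT)")

# Dump the state when the players leave the game
db.execute(
    "INSERT OR REPLACE INTO games (id, state) VALUES (?, ?)",
    (state["game_id"], json.dumps(state)),
)
db.commit()

# Later, when a player logs back in, load it into fresh Game/Player objects
row = db.execute("SELECT state FROM games WHERE id = ?", ("uno-42",)).fetchone()
restored = json.loads(row[0])
print(restored["hands"]["alice"])   # ['Red 1', 'Blue 7']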

Two peers RTMFP Chat: should I use NetGroup or not?

I made a Chat mainly inspired by the Cirrus Sample
Chat works fine, but in some cases the "NetStream.Connect.Success" event doesn't get triggered.
Both connections pass the port checks.
Before switching to a NetGroup architecture, and presuming that these problems are related to the connection process, I'd like to know:
- Would NetGroup behave differently during the NetStream connection process than a direct NetStream connection does?
- What are the limitations of NetGroup? I read there was more latency when using it.
ok, so first and foremost, NetStream.Connect.Success fires on the NetConnection and not the NetStream. That is the biggest misconception and frustration for people trying to get this all set up. Adobe did this for legacy (historical) reasons. So check that first to make sure you are listening in the proper spot.
if you are sure you have the listener in the right spot, you may have NAT or firewall comms issues that prevent one peer from seeing the other in certain circumstances.
Now regarding groups:
NetGroup does not (necessarily) introduce latency. In groups with fewer than 14 members, you have a full mesh (all members have a direct peer connection to all others). Using a group of fewer than 14 members will still net you a blazingly fast P2P connection, provided you use sendToAllNeighbors(). Where you hear about latency is with post(): post() runs a bunch of extra logic that introduces latency, since every 10 seconds it tries to contact the 3 descending, 3 ascending, and fractional ring connections, the 6 least-latent peers, and 1 random peer, and then tries to forward the message on to be distributed to the rest of the group. Even in small groups this can take a second or two.
Here is a link to a video from MAX that goes through all the nitty-gritty (so to speak) on RTMFP and its ring-based architecture: Cool In-Depth Video About RTMFP

How to force a multi-hopping topology with XBee ZB?

I use some XBee (S2) modules with the ZB stack for mesh-networking evaluation, so a multi-hopping environment has to be created. The problem is that the firmware handles the association by itself and there is no way to get deeper into the stack than the API provides. To force the path of the data without disturbing the routing mechanism I was trying to measure, I had to put the modules outside each other's reach. Getting only the next hop to associate isn't that easy. I used the lowest output power level, but the distance required for the test setup is too large and the RF characteristics of the environment change unpredictably.
Therefore my question, has anyone experience with this issue?
Regards, Toby
I don't think it's possible through software and coordinator/routers. You could change the Node Join Time (ATNJ) to force a new router to join through a particular router (disable Node Join on all nodes except one), but that would only affect joining. Once joined to the network, the router will discover that other nodes are within range.
You could possibly do it with sleepy end devices. You can use the ATNJ trick to force an end device to join through a single router, and it will always send its messages to that router. But you won't get that many hops -- end device sends to its parent router, which sends to the target's parent router, which sends to the target end device.
You'll likely need to physically limit the range of the radios to force hopping, as demonstrated in the video you linked of Digi's K-Node test equipment with a network of over 1000 radios. They're putting the radios in RF-shielded boxes and using wired antenna connections with software-controlled attenuators to connect the modules to each other.
If you have XBee modules with the U.fl or RPSMA connector, and don't connect an antenna, it should significantly reduce the range of the module. Otherwise, with a wire whip or integrated PCB antenna, you need to put each radio in some sort of box that attenuates the signal. Perhaps someone else can offer advice on materials that will reduce the signal's range without completely blocking it.
ZigBee nodes try to form an ad-hoc network automatically. That is why they join the network with the strongest connection (best network coverage) available at that moment. These modules are designed in such a way that you do not have to care much about establishing reliable communication; they will solve networking problems most of the time.
What you want to do is somehow force a different situation. You want to create a specific topology in order to get some multi-hopping. That will not be the normal behavior of the nodes, but you can still get what you want with some of the AT commands.
The mentioned command "NJ" should work for you. This command locks joins after a certain time (in seconds). Let us think of a simple ZigBee network with three nodes: one Coordinator, one Router and one End-Device. Switch on the Coordinator with "NJ" set to, let us say, two minutes. Then quickly switch on the Router, so it can associate with the Coordinator within these two minutes. After these two minutes, the Coordinator will be locked and will not accept more joins. At that moment you can start the End-Device, which will have to associate with the Router necessarily. This way, you will see that messages between End-Device and Coordinator go through the Router, as you wanted.
You may get a bigger network applying this idea several times, without needing to play with the module's antennas. You can control the AT Parameters remotely (i.e. from a Computer connected to the Coordinator), so you can use some code to help you initialize the network.
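If it helps, here is a rough sketch of setting NJ on the local module over its serial port using the standard '+++' AT command mode; the port name and the two-minute value are just examples, and you could equally do this from XCTU or with remote AT commands.

import time
import serial  # pyserial

PORT = "/dev/ttyUSB0"   # example port for the coordinator's serial adapter

def at_command(ser, cmd):
    """Send one AT command while in command mode and return the module's reply."""
    ser.write((cmd + "\r").encode("ascii"))
    return ser.read_until(b"\r").decode("ascii").strip()

with serial.Serial(PORT, 9600, timeout=2) as ser:
    time.sleep(1)                       # guard time before '+++'
    ser.write(b"+++")                   # enter AT command mode (no carriage return)
    time.sleep(1)                       # guard time after '+++'
    print(ser.read_until(b"\r"))        # expect b'OK\r'
    print(at_command(ser, "ATNJ 78"))   # allow joins for 0x78 = 120 seconds
    print(at_command(ser, "ATWR"))      # write the setting to non-volatile memory
    print(at_command(ser, "ATCN"))      # leave command mode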

Can someone explain an Enterprise Service Bus to me in non-buzzspeak?

Some of our partners are telling us that our software needs to interact with an Enterprise Service Bus. After researching this a bit, my instinct is to say that this is just buzz speak for saying that we need to have a platform-independent way to pass messages back and forth. I'm just trying to get a feel for what our partners are telling us. Am I correct in dismissing our partners' request as just trying to get our software to be more buzzword-compliant, or are they telling us something we should listen to (even if encoded in buzzspeak)?
Although ESB is based on messaging, it is not "just" messaging and not just a buzzword.
So if you start with plain old async messaging, the early networks tended to be very point-to-point. You had to wire up (i.e. configure through some admin interface) each connection and each pair of destinations and if you dared to move anything around invariably something broke. Because the connection points were wired by hand these networks never achieved high connection density. The incremental cost was too high and did not scale. There was also a lot of access control and policy embedded in the topology. The lack of connection density actually favors this approach to security, even though it inhibits flexibility.
The ESB attempts to address these issues with...
Run-time resolution of destinations/services/resources
Location transparency
Any-to-any connectivity and maximum connection density
Architected for redundancy, horizontal scalability, failover
Policy, access control, rules externalized from topology
Logical messaging network layer implemented atop the physical messaging network layer
Common namespace
So when your customer asks for ESB compatibility, they want things like the above. From an application standpoint, this also implies...
Avoiding message affinities such as requirements to process in strict sequence or to address requests only to specific nodes instead of to a generic network destination
Ability to resolve destinations dynamically at run time (i.e. add another instance of a queue and it automatically starts getting traffic, delete one and traffic routes to the remaining nodes)
Requestor and provider apps decoupled from knowing where each other "lives". Requestor makes one connection, regardless of how many services it might need to call
Authorize by policy rather than by topology
Service provider apps able to recognize and handle dupes (as per JMS spec, see "functional duplicate" due to session handling)
Ability to run multiple active instances of a service provider application
Instrument the service provider applications so you can inquire on the status of the network or perform a test without sending an actual transaction
On the other hand, if your client cannot articulate these things then they may just want to be able to check a box that says "works with the ESB."
I'll try & keep it buzzword free (but a buzz acronym may creep in).
When services/applications/mainframes/etc. want to integrate (that is, send messages to each other), you can end up with quite a mess. An ESB hides that mess inside of itself (or itselves) so that an organisation can pretend that there isn't a mess and that it has something manageable. It then wraps a whole load of features around this to make the box even more enticing to the senior people in an organisation who'll make the decision to buy such an expensive product. These people typically want to introduce a large initiative which costs a lot of money to prove that they are 'doing something' and know how to spend large amounts of money. If this is an SOA initiative, then various vendors will have told them that an ESB is required to make the vendors' vision of what SOA is work (typically once the number of services they might want passes a trivial number).
So an ESB is:
A vehicle for vendors to make lots of money;
A vehicle for consultants to make lots of money;
A way for senior executives (IT Directors & the like) to show they can spend lots of money;
A box to hide a mess in;
A total PITA for a technical team to work with.
After researching this a bit, my instinct is to say that this is just buzz speak for saying that we need to have a platform-independent way to pass messages back and forth
You are correct, partially because the term ESB is a nice word that fits well with another buzzword, legitimate or not: governance (i.e., it helps you manage who is accessing your endpoints and report metrics; metrics, by the way, are what all the suits like to see, so that may be a contributor).
Another reason they might want a platform-neutral device is so that any services they consume are always exposed as endpoints from a central location instead of as a specific machine resource. The ESB makes the actual physical endpoints of your services irrelevant to them, which they shouldn't care much about anyway, and that enables you to move services around while they continue to consume only the ESB endpoint.
Apart from a centralized repository for Discovery, an ESB also makes side by side versioning of services easier. If I had a choice and my company had the budget, we would have purchased IBM's x150 appliance :(
Thirdly, a lot of the more advanced buses, like SoftwareAG's product if I recall correctly, are natively able to expose legacy data, such as data sitting on mainframes, as services via adapters, without the need for coding.
I don't know if their intent is to leverage all the benefits an ESB provides, or as you said, make it buzzword compliant.
After researching this a bit, my instinct is to say that this is just buzz speak for saying that we need to have a platform-independent way to pass messages back and forth
That's about right. Sometimes an ESB will go a little bit further and include additional features like message delivery guarantees, confirmation/acknowledgement messages, and so on. The presence of an ESB also usually explicitly or implicitly creates a new protocol where none previously existed, which is another important consideration. (That is, some sort of standard or interface has to be set regarding the format of the messages.)
Am I correct in dismissing our partners' request as just trying to get our software to be more buzzword-compliant, or are they telling us something we should listen to (even if encoded in buzzspeak)?
You should always listen to your customers, even if it initially sounds silly. It's usually worth at least spending the effort to decide what's going on. Reading between the lines, what your partners probably mean is that they want a way for your service to integrate more easily with their own services and products.
An enterprise service bus handles the messaging between systems in a standard way. This allows you to communicate with the bus in exactly the same way across all your platforms, and the bus handles the actual translation to the individual communication mechanism needed for the specific endpoint. This means you write all your code to talk to the bus using a common messaging scheme, and the bus takes your common scheme and translates it so the endpoint understands it.
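To make that "common messaging scheme" idea concrete, here is a minimal sketch: the application builds one generic envelope and hands it to a single bus endpoint, and the bus (not shown) is responsible for routing and translating it for whichever backend actually serves the request. The endpoint URL and field names are made up for illustration.

import json
import urllib.request

# Hypothetical bus endpoint; in practice this is the only address the app knows
BUS_ENDPOINT = "http://esb.example.local/messages"

def send_to_bus(service, action, payload):
    """Wrap a request in a generic envelope and post it to the bus."""
    envelope = {
        "service": service,   # logical destination, resolved by the bus at run time
        "action": action,
        "body": payload,
    }
    request = urllib.request.Request(
        BUS_ENDPOINT,
        data=json.dumps(envelope).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read().decode("utf-8"))

# The caller never knows (or cares) which HR system actually answers this
employees = send_to_bus("hr", "AllEmployees", {"department": "finance"})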
The simplest explanation is to explain what it provides:
For many years, companies acquired different platforms and technologies to achieve specific functions in their business, from Finance to HR. These systems needed to talk to each other to share data, so middleware became the glue that allowed them to connect. Before the business knew it, they were paying for support and maintenance on each of these systems and the middleware. As needs in the business changed, departments decided to create their own custom solutions to address special needs rather than try to make the aging solutions flexible enough to meet their needs. Before they knew it, they were paying to support and maintain the legacy systems, the middleware, and the custom solutions.

With new laws like Sarbanes-Oxley, companies need to have better information available for reporting purposes. A single view requires that they capture data from all of the systems. In addition, CIOs are now being pressured to lower costs and increase customer service. One obvious solution is to eliminate redundant systems, expensive support and maintenance contracts, and high-cost legacy solutions which require specialists to support. Moving to a new platform allows for this, but there needs to be a transition. There are no turnkey solutions that can replicate what the business does.

To address the need to move information around, they go with SOA because it allows for information access through a generic entity. If I ask for AllEmployees from the service bus, it gets them whether they come from 15 HR systems or 1. When the 15 HR systems become 1 system, the call and the result do not change, just how it was done behind the scenes. The Service Bus concept standardizes the flow of information and allows IT managers to conduct transitions behind the bus with no long-term effect on upstream users.