Detecting a crash from OBD (On-Board Diagnostics) data? - obd-ii

In our application we are collecting OBD data continuously from the vehicle's port. Our requirement is to detect vehicle crash scenarios (accidents etc.). Currently we are reading the OBD parameters speed, temperature, RPM etc. Is it possible to identify a vehicle accident from these parameters, or do we need to use more parameters?
Please share your knowledge. Thanks in advance.

I think you can check for a sudden deviation in speed. You can get the speed and RPM information from the vehicle.
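To make that idea concrete, here is a minimal sketch in Python of the sudden-deceleration heuristic. The sampling interval and threshold are illustrative assumptions, not calibrated values; a real system would want to cross-check other signals (RPM, for example) before declaring a crash.

```python
# Sketch: flag a possible crash when speed drops faster than any plausible
# braking rate. SAMPLE_INTERVAL_S and CRASH_DECEL_KMH_PER_S are assumptions.
SAMPLE_INTERVAL_S = 0.5          # how often we poll the OBD speed value
CRASH_DECEL_KMH_PER_S = 40.0     # ~11 m/s^2, beyond maximum braking (~1 g)

def is_possible_crash(prev_speed_kmh: float, speed_kmh: float) -> bool:
    decel = (prev_speed_kmh - speed_kmh) / SAMPLE_INTERVAL_S
    return decel > CRASH_DECEL_KMH_PER_S

# 60 km/h to 5 km/h within one 0.5 s sample is a deceleration of
# 110 km/h per second, far above the threshold, so it is flagged.
print(is_possible_crash(60.0, 5.0))  # True
```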

Related

Google Cloud SQL Timeseries Statistics

I have a massive table that records events happening on our website. It has tens of millions of rows.
I've already tried adding indexes and other optimizations.
However, it's still very taxing on our server (even though we have quite a powerful one) and takes 20 seconds on some large graph/chart queries. So long, in fact, that our daemon often intervenes to kill the queries.
Currently we have a Google Compute instance on the frontend and a Google SQL instance on the backend.
So my question is this - is there some better way of storing and querying time series data using the Google Cloud?
I mean, do they have some specialist server or storage engine?
I need something I can connect to from my PHP application.
Elasticsearch is awesome for time series data.
You can run it on compute engine, or they have a hosted version.
It is accessed via an HTTP JSON API, and there are several PHP clients (although I tend to make the API calls directly, as I find it better to understand their query language that way).
https://www.elastic.co
They also have an automated graphing interface for time series data. It's called Kibana.
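To show what those direct API calls look like, here is a sketch in Python of a time series aggregation; the same JSON body works from PHP or curl. The index name and timestamp field are assumptions, and older Elasticsearch versions use `interval` instead of `calendar_interval`.

```python
# Sketch: ask Elasticsearch for hourly event counts via its HTTP JSON API.
import json
import urllib.request

query = {
    "size": 0,  # we only want the aggregation, not the matching documents
    "aggs": {
        "events_per_hour": {
            "date_histogram": {"field": "timestamp", "calendar_interval": "hour"}
        }
    }
}

req = urllib.request.Request(
    "http://localhost:9200/events/_search",
    data=json.dumps(query).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.load(resp)

for bucket in result["aggregations"]["events_per_hour"]["buckets"]:
    print(bucket["key_as_string"], bucket["doc_count"])
```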
Enjoy!!
Update: I missed the important part of the question "using the Google Cloud?" My answer does not use any specialized GC services or infrastructure.
I have used Elasticsearch for storing events and profiling information from a web site. I even wrote a statsd backend that stores stat information in Elasticsearch.
After Elasticsearch changed Kibana from 3 to 4, I found the interface extremely bad for looking at stats. You can only chart one metric from each query, so if you want to chart time, average time, and the 90th-percentile average time, you must run three queries instead of one that returns three values. (The same issue existed in 3; version 4 just looked uglier and was more confusing to my users.)
My recommendation is to choose a time series database that is supported by Grafana - a time series charting front end. OpenTSDB stores information in a Hadoop-like format, so it will be able to scale out massively. Most of the others store events as something similar to row-based information.
For capturing statistics, you can either use statsd or Riemann (or Riemann feeding into statsd). Riemann can add alerting and monitoring before events are sent to your stats database; statsd merely collates, averages, and flushes stats to a DB (a minimal example follows the links below).
http://docs.grafana.org/
https://github.com/markkimsal/statsd-elasticsearch-backend
https://github.com/etsy/statsd
http://riemann.io/
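Since statsd came up above, a note on how little client code it needs: metrics are plain-text UDP datagrams, so instrumenting an application is a few lines. This sketch assumes statsd listening on its default port, 8125.

```python
# Sketch: emit a counter and a timer to statsd over UDP.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def statsd_send(metric: str) -> None:
    sock.sendto(metric.encode("ascii"), ("localhost", 8125))

statsd_send("web.requests:1|c")       # increment a counter
statsd_send("web.response_ms:42|ms")  # record a timing in milliseconds
```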

Best Approach to Storing Mean Uptime Data

We have 500+ remote locations. Each location has a linux router which checks in to our management system (homemade using RoR3) every 15 minutes.
We need to log and calculate the mean uptime of each box's Internet connectivity.
Each router posts a request every 15 minutes to a script on the server. (Currently this just records the last checkin time and the uptime.)
If we want to plot the historical uptime of each box, what is the most efficient way to do this without clogging up our database?
500 boxes checking in every 15 minutes would (according to my calculations) result in 17,520,000 inserts a year: 500 boxes × 96 check-ins a day × 365 days. Quite a hefty amount of data that I don't think we need.
Could anyone help solve this riddle for us?
Why not take a look at RRDTool (Wiki-entry)? It's just the tool for this kind of situation.
It works as a sort of round-robin, self-averaging database, and it's used in many logging applications for purposes similar to yours.
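If the round-robin, self-averaging idea is unfamiliar, here is a rough sketch of it in Python: keep fine-grained samples for a short window and fold older samples into coarser averages, so storage stays fixed no matter how long you log. The slot sizes are illustrative; RRDTool itself handles all of this for you.

```python
from collections import deque

class RoundRobinStore:
    """Fixed-size store: recent 15-minute samples plus daily averages."""

    def __init__(self, fine_slots=96, coarse_slots=365, fold=96):
        self.fine = deque(maxlen=fine_slots)      # last 24h of 15-min samples
        self.coarse = deque(maxlen=coarse_slots)  # last year of daily averages
        self.fold = fold                          # fine samples per coarse slot
        self._pending = []

    def add(self, value):
        self.fine.append(value)
        self._pending.append(value)
        if len(self._pending) == self.fold:
            # fold a full day of samples into one coarse slot, then discard them
            self.coarse.append(sum(self._pending) / self.fold)
            self._pending.clear()
```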
As an example, take a look at Cacti, which is a data-logging / network monitoring and graphing front-end app built around RRDTool (implemented in PHP).

DBMS to use for failover

We are developing a payment system for venues.
Now we're considering what DBMS to use for our application.
At the moment we're using Microsoft's SQL Server Express to store our data.
But because the system is going to be used at very busy venues, we think we need a failover system in case the database server goes down.
We have been looking at using MS SQL Server replication, but that is going to cost a lot of money for a case that is (hopefully) never going to happen.
The database has to be up for only a couple of hours (the duration of the venue), or at most a couple of days.
But if the database is down for 30 minutes, no one can order drinks or get access to the venue. And with thousands of people in the venue, that's going to cause a lot of trouble.
So... can anyone share some of their expertise and/or point me to some reading about failover, replication, or anything related?
Thanks in advance
I would suggest that you forget about the DBMS until you have a clear business plan, systems architecture and defined goals for availability. Ideally, find someone who has implemented a system like this before and hire them or use them as a consultant.
You also have to look into how much money you will lose if the system goes down: actual sales, loss of reputation and future business, contractual penalty clauses etc. Compared to those costs, adding a second server might start looking fairly cheap. But where will it be hosted, how does connectivity work, who handles operations etc.? Having a fully redundant failover database cluster will not help you if your one and only Wi-Fi base station for the POS terminals suddenly dies.
Perhaps you have already considered and answered all these questions, and if so it would be helpful to add some more details to your question about the main constraints and requirements you have.

Save serial data to web server database

I am new to this, so any thoughts are very welcome. :)
What I am trying to do is read serial data via an RS232 cable going into the COM1 port of a laptop, and then save this data into a web server database of some kind. I think MySQL is the way to go for storing my data. However, I don't see much documentation on how I can automate streaming the serial data into the database. I only found this webpage that says it is possible. Any thoughts? Pointers to tutorials and/or references?
Thanks.
MySQL is a relational database. Is the data you read on the serial port relational? From your usage of words, I doubt it.
If it is some kind of measurement data you need to store for a specific interval, the "Round Robin Database" might be a better choice. It even offers the option of storing old data with less resolution using less disk space.
If you insist on using MySQL, you probably want to collect the data for a while and save a standard-sized chunk as a binary large object (BLOB) along with a timestamp.
A few questions come to mind - are you able to develop and install a custom software solution, or do you want to build this with off-the-shelf tools?
If you are allowed to install custom software, reading from RS-232 and connecting to MySQL is really simple with C#, so the whole program will be less than a hundred lines of code. You just read the stream and from time to time insert it into a table with a structure like id, datetime, TEXT. Depending on the nature of the stream, you can trigger the insert on the number of bytes received, on time elapsed, or on some logical condition.
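For a feel for how small such a program is, here is a sketch of the same idea in Python rather than C# (read the port, insert a fixed-size chunk with a timestamp). It assumes the pyserial and mysql-connector-python packages; the port name, credentials, and table layout are placeholders.

```python
import datetime
import serial                 # pyserial
import mysql.connector

ser = serial.Serial("COM1", baudrate=9600, timeout=1)
db = mysql.connector.connect(host="localhost", user="logger",
                             password="secret", database="telemetry")
cur = db.cursor()

CHUNK_SIZE = 1024  # insert whenever we have a standard-sized chunk
buffer = b""

while True:
    buffer += ser.read(256)   # up to 256 bytes, or fewer on timeout
    if len(buffer) >= CHUNK_SIZE:
        cur.execute(
            "INSERT INTO readings (recorded_at, payload) VALUES (%s, %s)",
            (datetime.datetime.utcnow(), buffer[:CHUNK_SIZE]),
        )
        db.commit()
        buffer = buffer[CHUNK_SIZE:]
```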

How do applications collect statistics?

I need to collect statistics from my server application, which is written in Python. I am looking for some general guidance on how to set up models and exactly how to store the statistics information. I was thinking of storing and organizing all this information in a database, but my implementation is turning out to be too specific.
I need to collect stats like active users, requests processed and things like that over time.
Are there any guides or techniques out there to create some more generic statistics storage systems?
As with most software problems, there is no single solution I can recommend that will solve yours. But I have created a few similar programs, and here are some things I found that worked well.
Create an asynchronous logging service so the logging doesn't adversely affect your code's performance. (You still need to be mindful of where you store your data, where it is processed, etc., because you can significantly degrade performance if you're not careful.) I have found that creating a web service is often convenient. A sketch of the asynchronous idea follows these points.
Try to save as much information about each request as possible. In the future this will make it easier to add new queries and reports.
Normalize your data
Always include the time the action was performed. If you can capture run time, that is typically useful too.
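Here is a minimal sketch of that asynchronous logging idea in Python: request handlers push events onto a queue and return immediately, while a background thread drains the queue to storage. All the names here are illustrative.

```python
import queue
import threading
import time

events = queue.Queue()

def record(action: str, **fields):
    # called from request-handling code; cheap and non-blocking
    events.put({"action": action, "time": time.time(), **fields})

def writer():
    while True:
        event = events.get()
        # replace with a real INSERT / HTTP call to your stats store
        print("stored:", event)
        events.task_done()

threading.Thread(target=writer, daemon=True).start()

record("request_processed", user="alice", run_time_ms=12.5)
events.join()  # demo only: wait for the writer to drain the queue
```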
One approach is to do this in stages: store activity logs, including requests and users, as text files; later, mine the logs into data points (Python should be able to do this easily). You may want to use Python's logging library for the logging stage. In general, start with high time-resolution logging, which you can later aggregate into hourly, daily, weekly summaries etc.
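A small sketch of that staged approach, with the logging module writing timestamped lines and a second pass aggregating them into hourly counts. The file name and line format are assumptions.

```python
import logging
from collections import Counter

# Stage 1: log raw events with a parseable timestamp prefix.
logging.basicConfig(filename="activity.log", level=logging.INFO,
                    format="%(asctime)s %(message)s",
                    datefmt="%Y-%m-%dT%H:%M:%S")
logging.info("request user=alice path=/checkout")

# Stage 2 (later): mine the log into hourly counts.
hourly = Counter()
with open("activity.log") as f:
    for line in f:
        hourly[line[:13]] += 1  # "YYYY-MM-DDTHH" prefix of each line
print(dict(hourly))
```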