How does one deal with multiple TimeZones in applications that store dates and times? - mysql

I realize this is a bit subjective, but I'm hoping to pick everyone's brains here on how they deal with multiple time zones. There are a variety of similar questions here and an equally wide variety of accepted answers.
How have you dealt with this in apps you've built, and what issues did you have to overcome?

You always store dates and times in one time zone (nine times out of ten that will be London time) and convert them on display to the time zone of your user. This is strictly an application-level issue, not a database one.
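For illustration, a minimal Python sketch of that pattern (the question is tagged mysql, but the conversion itself lives in application code; zoneinfo is in the standard library from Python 3.9):

    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo

    # Store the timestamp in one canonical zone (UTC here)...
    stored = datetime(2023, 6, 1, 14, 30, tzinfo=timezone.utc)

    # ...and convert only at display time, to the viewing user's zone.
    local = stored.astimezone(ZoneInfo("Europe/London"))
    print(local.isoformat())  # 2023-06-01T15:30:00+01:00 (BST in summer, not UTC)

Note that in summer the London view differs from UTC by an hour, which is why the next answer is careful to distinguish UTC from "Greenwich".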

At work we manage several clocks at once, not only time zones but also some more esoteric clocks used for spacecraft navigation.
The only things that really matter are: that you are consistent with ONE AND ONLY ONE CLOCK, whichever you pick, and that you have the APPROPRIATE CLOCK CONVERSIONS for when you need a VIEW OF TIME in a CLOCK DIFFERENT FROM YOUR ONE AND ONLY ONE CLOCK.
So:
One and Only One Clock: Pick the simplest one which will solve your problem, most likely this will be UTC (what some people would -incorrectly- call Greenwich, but the point remains: the zero line).
Appropriate clock conversions: This depends on your application, but you need to ask and answer the following two questions: How much resolution do I need? Do I need to account for leap seconds? Once you have answered them, you may be able to pick standard libraries, or you may need more esoteric ones. Either way, you must ask these questions.
View of time: when someone picks a view of time (say, Pacific Time) simply call the appropriate clock conversion on demand.
Really, that's it.
As for libraries, I use Python for scripting, the NAIF SPICE library for mission design, and in-house code for spacecraft navigation. The difference between them is simply the resolution and the reliability with which they account for everything you need to account for (Earth rotation, relativity, time dilation, leap seconds, etc.). Naturally, you pick the library that fits your needs.
Good luck.
Edit:
I forgot to mention: don't try to implement your own time-management library - use an off-the-shelf one. If you try, you may succeed, but your real project will die and you will only have one average date-time library to show for it. Maybe I'm exaggerating, but making a solid, general-purpose date-time library is far from trivial, i.e. it is a project in itself.

On one of the projects I'm working on, I'm using SQL Server 2005 and GETUTCDATE() to store dates in UTC.
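A hedged sketch of how that can look from application code; the table name, column names and connection details below are invented for illustration, and pyodbc is just one SQL Server driver among several:

    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
        "DATABASE=mydb;Trusted_Connection=yes"
    )
    cur = conn.cursor()

    # Let the server stamp the row in UTC, rather than trusting client clocks.
    cur.execute("INSERT INTO events (name, created_at) VALUES (?, GETUTCDATE())", "login")
    conn.commit()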

You need to store each user's time zone, along with full time zone information.
And always store times in UTC. When you want to display that information to a particular user, look up that user's stored time zone, add or subtract its offset from the stored UTC time, and display the result. It will then be in the time zone of the user who is viewing the information.

I think the new best practice with SQL Server 2008 is to always use the datetimeoffset datatype. Normalizing dates to UTC is also good practice, but it is not always feasible or desirable.
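The point of datetimeoffset is that it keeps the original local offset while still ordering correctly on the UTC timeline; a small standard-library Python analogue of that behaviour:

    from datetime import datetime, timezone, timedelta

    # The same instant recorded in two places with different local offsets.
    ny = datetime(2023, 3, 1, 9, 0, tzinfo=timezone(timedelta(hours=-5)))
    berlin = datetime(2023, 3, 1, 15, 0, tzinfo=timezone(timedelta(hours=1)))

    print(ny == berlin)        # True: equal on the UTC timeline
    print(ny.isoformat())      # 2023-03-01T09:00:00-05:00 (offset preserved)
    print(berlin.isoformat())  # 2023-03-01T15:00:00+01:00

A plain UTC column answers "when"; an offset-preserving type also answers "what the user's wall clock said".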
See this blog post for more thoughts:
Link

+1 for #kubal5003.
Display of dates and times is always complicated by culture and time zone, so it's best to use the layer closest to the user (e.g. the browser or the local application) to do this. It also moves some of the load from the database to the user's machine.
There is an exception for server-generated reports, though. So I store the time zone name/ID (occasionally just the offset/bias) in order to find the start of day. This may be system-wide or on a per-client/brand basis.
For web applications I usually detect a user's default time zone via geolocation (this is rarely wrong, since geo data is quite accurate now).

Related

Better to store miscellaneous metadata in database or calculate on each access

I have a number of attributes I need for various page loads and other backend tasks, and I'm debating whether to store these things in a database or calculate them on the fly.
For instance, if there are files that users can upload, and you want to track the size, space taken, format, etc. would it be better to calculate these things once and store them along with the location of the file in the database, or just grab the file each time and get the file attributes manually?
Another use case is shopping cart items. Is it better to calculate the price of an item and store it in a row of the shopping cart table, or to calculate the price each time the page loads? In this case, changes to the price from site-wide sales, discounts, markups, etc. would not be reflected once the item has been added to the cart, unless prices are updated by some other method when those sales/discounts/markups are applied. This isn't the best example, but hopefully you understand the idea; maybe you have a better one.
In both of these examples, the source material is available to derive the answers from, which is key to the question. Obviously, one approach adds overhead to every page load, which could be significant depending on the situation; on the other hand, it depends less on database integrity to stay accurate and up-to-date (which I think I prefer). I'm not looking for a specific answer here, because I'm sure it will depend on many variables, but I am looking for a best practice or a method for determining the best solution.
NOTE: This is a similar question but has gotten very little response and no answers.
There can be trade-offs to weigh:
When a user is waiting, elapsed time is critical.
A user will expect up-to-the-second pricing information.
A user will be frustrated if he orders something, only to find out that you are out of stock before his order is submitted.
We cannot say how fast it will be to recalculate things in your system. If you have some SQL code, we can help you speed it up.
Sometimes in computer science it is faster to do the computation when the thing changes; sometimes it is faster to compute when the result is requested. It sounds like in your case computing on request is also the slower option. If it is not "too slow", then that option may still be viable.
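As a toy illustration of the two strategies from the question (all names here are invented, and a dict stands in for the database):

    import os

    # Compute on write: snapshot the attribute when the row is created.
    def save_upload(db, path):
        db[path] = {"path": path, "size": os.path.getsize(path)}  # stored once

    # Compute on read: always derive from the source of truth.
    def upload_size(path):
        return os.path.getsize(path)  # fresh every call, but hits the disk per request

The cart price has the same shape: a stored price is stale but stable, while a recomputed price is fresh but costs work on every page load.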

How to manage or convert extremely large program

I have a business-critical Microsoft Access database that was originally created in the '90s and has been enlarged and upgraded up to Access 2007 at this point. We have essentially been using this database as the front end for a custom-written ERP system. We moved most of the data over to a SQL Server long ago, but we are still using MS Access as the front end. As the project grows (we have a full-time developer), we have started having stability problems and extremely frequent crashes of unknown cause.
As an example: one time out of 10 or so, a certain form will crash if I change the data in one specific field. There is no code firing at the time; the data is in a local temporary table that typically has only 5 rows. If I change the data in the table, nothing goes wrong, but if I change it on the form, Access will hard-crash and dump me to the desktop. There are other examples of unexplained crashes I could provide.
I am looking for advice on where to go at this point - the Access front end holds essentially all of the business logic for running our company, so I can't just abandon it. Ideally we would re-write the entire front end in some other language. The problem is that, as a small company, we don't have the resources to re-write the entire system in anything resembling a good time frame, and we don't have the cash flow to pay someone else to do it. My ideal solution would be a conversion of some sort from the Access front end to another endpoint - whether web or local Windows - but my searches here and on Google make that seem like a non-starter.
So essentially every avenue I look at seems to be a dead-end:
We can't find the source of the crashes to stabilize our current system,
We can't stop production in our current system for as long as it would take to re-write it,
We can't afford to pay someone else to write a new system,
Automated conversion tools seem like a waste of money.
Are there other options, or which of the options I have thought of seems best?
We have an enterprise-level program with an Access front end and a SQL Server back end. I wonder if it might help to split the program into different pieces for diagnostics. For instance, if you have Order Entry and Inventory Management, you could have one front end for each function. (Yes, I can hear the howling in the background, but if it were only for the purpose of diagnosis, maybe it would help...)
You can also export the Access database objects to text files and then import them into a fresh new database, which gets rid of weird errors on some occasions.
Well, I guess I will rephrase the basic issue here. If you have two computing science graduates on staff, they should have anticipated long ago that they were reaching the limits of Access. I fail to see this as any different from an overworked oven in a restaurant that now has too many customers, or a delivery truck that no longer has the capacity to deliver goods to customers.
Since funds don't exist for a re-write, your staff failed to put aside funds on a monthly basis to deal with this situation, and now your choices, as a result of that delayed action on a growing problem, place you in a difficult situation.
The computing science people you have on staff should have seen this wall and limit coming long ago. In my experience, most CS people consider Access rather limited, so even MORE alarm bells should have been ringing here, which means even less excuse exists for you to be placed in this unfortunate predicament.
So, assuming the computing science staff you have are maintaining this application well (and I graciously accept this is the case), a logical conclusion is that the application has reached or exceeded the limits of Access. As noted, such limits should have been anticipated long ago.
As you well point out, funds do not now exist for a re-write, so few choices exist without such funds.
However, your case may not be so bad, since we NOW know you have experienced developers on staff. Given that, my suggestion would be to break out modules, or small manageable parts and features, one at a time from the application, and have your well-trained and experienced developers build either a web interface or perhaps something like .NET if you wish to stay 100% desktop. So this "window" of opportunity is a great chance to consider a change in architecture.
Since the data limits are SQL Server's and NOT Access's, both applications (the existing Access front end and the new parts) can easily operate on the same data. As you do this, you break out and remove the corresponding parts from the Access application. That should let you eventually return to acceptable stability in the Access application. At that point you could continue, or stop to save funds.
As noted, without funds for a re-write, the only choice is to find some means to free up SOME limited resources on a monthly basis to solve this problem.
At the end of the day, the solution to this problem is more resources, but without such resources few technical choices and options exist. Based on the information given so far, YOU have made it clear you don't have resources here. However, the solution to this problem requires resource allocation and planning.
In other words, a technical fix to this problem without allocated resources is not likely an option available to you.
I apologize sincerely for not being able to give you a technology solution here, but this looks to be a problem that will require resources to be allocated to it, and no simple shortcut, trick, or magic silver bullet exists here.

How long does code last?

I'm in the process of going back over some of the more minor TODO's in my code. One of them is in a class that handles partial dates, e.g. Jan 2001. It works fine for dates that will be seen in our system (1990 - 2099) and gracefully fails for other dates.
The TODO that I've left for myself is that I don't handle dates in the century 2100 and beyond. I don't really think it's worth the effort fixing this particular problem, but I am cognisant of the Y2K bugs. If we were in 2080 already, I think I'd feel differently and would fix the bug.
So how long does code last for? How far ahead should we plan for our systems to keep running for?
Update
Ok, thanks for all your input. I think I'm going for the option of leaving the TODO in the code and doing nothing. The thoughts I found most interesting were:
#Adrian - Eternity, I think that's the most correct assumption, your point about VM's is a good one.
#jan-hancic - It depends, yes it does.
#chris-ballance - I'm guessing I'll be dead by the time this restriction is hit, so they can come defile my grave if they want, but I'll be dead, so I'll just haunt his ass.
The reason I decided to do nothing was simple. It added negligible business value; the other things that needed looking at did add value, so I'll do them first, and if I get the time I'll fix it. But really it would be nothing more than an academic exercise.
Longer than you expect.
Eternity.
Given the trend that old systems keep running in virtual machines, we must assume that all useful code will run forever. There are many systems that have been running since the '60s, e.g. backend code in the financial sector, and there seems to be no indication that these systems will ever be replaced. (In the meantime, the frontend gets replaced every other year with the latest fad in web technology. So the closer your code is to the core of your system, the more likely it is to run forever.)
You can't have a general answer here. Depends on what kind of project you are building.
If you are writing software for a space probe then you might want to code it so that it will work for the next 100 years and more.
But if you are programming a special Xmas offer for your company's web page, a few weeks should be enough ...
Assume that whoever will maintain the code is a psychopath and has your home address.
Nobody really knows. Professional programming has been around for 30-40 years, so nobody really knows if code is going to last for 100. But if the Y2K bug is any indication, a lot of code is going to stick around far longer than the programmer intended. And keep in mind that even if you take that into account, your code could still stick around longer than you expected. No matter how much you prepare, it might still outlive its intended life expectancy.
My advice is to not plan for code to last 100 years. Instead, try to make sure all your code will last the same length of time; that is, one part should not fail in 2 years while another part lasts 100. Remember, you should always fix the weakest link first, so there is no point making the strongest link stronger.
Sometimes, code lasts longer than you think. But, more important is the slippery slope argument. Once you forgive yourself a bit of non-bullet-proofness, you may be tempted to optimize further and skimp on logical correctness, until it finally bites you.
By the way, I recommend having an issue ID (such as a FogBugz case number) in every TODO comment, so that people can actually subscribe to and track the TODO.
In Dan Bernstein's immortal words: don't contribute to the Y10K problem!
I don't think the code will last that long.
Think about all the inventions and progress made in the last 90 years.
In 2100 we won't have to write code.
There will be some kind of brain-machine interface.
Well, we recently made a timestamp format where time is stored in an unsigned 64-bit integer as microseconds since 1970. It will last until the year 586912, which should be enough.
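The arithmetic checks out (using 365-day years):

    # 2**64 microseconds from the 1970 epoch, expressed in 365-day years:
    print(1970 + 2**64 / 1e6 / 86400 / 365)  # ~586912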
Coding for "forever" is unnecessary - of course you could use BigIntegers and such everywhere, but why? Just be prepared for more than 5 or 10 years. Twenty year old production code is not quite unusual nowadays, and I suspect that the average life cycle will get even longer in the near future.
It depends on how much business value the code has and how much resources it takes to write it from scratch. The more value and resources the longer it lasts. Ten years and more is typical for commercial "works, don't touch it" code.
I have always tried to code as if my applications must work "forever". I am quite sure I won't be around any more in 2100, but knowing my software has a built-in expiration date doesn't make me feel good. If you know about such things, try to avoid them! You never know - some unknown programmer in the future may be grateful.
Right up until the time that it breaks, or otherwise ceases to be useful, and then for a bit longer after that
The essential things are:
How good is your internal date class (get a very robust library version and stick to it!)
It's not just the passage of time but also the growth in the range of inputs your users want. For example, maybe you have 30-year mortgage inputs now, but next month someone decides to enter a 99-year lease maturing in 2110, or a 100-year Disney bond!
If you accept 2-digit year inputs with a date window, think very carefully about how that window is applied to start and end dates, and give lots of immediate feedback.
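On that last point, a typical pivot-window rule looks something like the sketch below; the pivot of 50 is an arbitrary illustrative choice, not a standard:

    def expand_two_digit_year(yy, pivot=50):
        """Map a 2-digit year into a 100-year window: 00-49 -> 2000s, 50-99 -> 1900s."""
        return 2000 + yy if yy < pivot else 1900 + yy

    print(expand_two_digit_year(30))  # 2030
    print(expand_two_digit_year(75))  # 1975

Applied blindly to both ends of a range, a window like this can silently turn a 99-year lease into a negative-length one - hence the advice to echo the expanded dates back immediately.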
Here are my two cents:
When we design a project, we usually declare that it should last "at least" 5 years. In practice it's usually no more than 10 years before we re-design and rebuild it (we're talking about mid-to-large-size projects here).
What usually happens is that the new project you build is supposed to replace the old one, for example technology-wise (moving from mainframe to Windows, VB to .NET, etc.), but the migration never quite finishes. So your client ends up working with 2 systems at once, and that leftover system is what later gets referred to as "legacy".
If you wait long enough, a third project will rise causing the client to work with 3 systems at once and so on...
But to answer your question: I would bet on 5-10 years before a redesign, and unless your dates are supposed to reach far into the future, there's no need to worry about the 2100 limitation.
IMHO it comes down to craftsmanship: the pride we take in our work, coding to a standard we would not be ashamed for another real coder to see.
In the case of dates like this, you've stated that it fails gracefully after 2100. That sounds like you can remove the TODO without a bad conscience, because you have built in a response that will let the cause of failure be easily diagnosed and fixed in the (however likely or unlikely) event that a failure occurs.
There are examples of 40- or 50-year-old code still running on older machines.
(Interesting bits in this thread: http://developers.slashdot.org/developers/08/05/11/1759213.shtml).
You've got to ask yourself about the nature of the problem you're solving, but generally speaking even "quick fixes" stay around for years, so you could realistically be looking at a decade or more for code intended to have a decent shelf life.
The other things you need to think about are:
1) What is the "active life" of the application - the period during which it will be in day-to-day use, processing data?
2) What is the "inactive life" of the application - where it's not used day to day but might be used for retrieving and viewing old records? For instance, UK audit law means that records need to be available for 7 years, so that's potentially 7 years from last system use.
3) What is the range of future data it needs to handle? For instance, say you're taking down credit card expiry dates - you can have a card that won't expire for a decade. Can you handle that date?
The answers to these questions will generally lead you to the conclusion that you should never knowingly write code with date constraints tighter than those the OS/language you're using dictates.
The question isn't "How long does code last?" but rather "How long will things in my code affect an application?"
Even if your code is replaced, it's possible that it will get replaced with code that does the exact same thing. To some extent, this is the direct cause of the Y2K problem. More to the point, it is the direct cause of the Y2038 problem.
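A quick Python check makes the Y2038 rollover concrete:

    from datetime import datetime, timezone

    # The largest instant a signed 32-bit time_t can represent:
    print(datetime.fromtimestamp(2**31 - 1, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00 -- one second later, the counter wraps.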
Also keep in mind what you mean by "last".
For example, the original UNIX operating system was developed over 30 years ago. But during those 30 years, the product has evolved over time.
Still, it wouldn't surprise me if a little bit of the original code exists in it today.
So think of it two ways: 1) do you ever anticipate the code being touched in the future? 2) the product/code will evolve if you have support and involvement.
My current shop has a large code base that runs financial applications with complex business rules. Some of these rules are encoded in stored procedures, some in triggers, and some in 3GL and 4GL application code. There is running code from the late '90s, and none of it is in your "traditional" legacy languages like COBOL or FORTRAN. As one can imagine, it's a steaming pile of spaghetti code, most of it created before TDD meant anything, so people are reluctant to touch anything.
I have had occasion to be brought in on contract more than a decade after the fact to consult on porting code to a new platform (OS/2 just isn't that popular these days!). When in doubt, assume that your code will live longer than you will. At the very least, document the heck out of limitations like this; fix them unless that takes tremendously more work than to document them.
In 1995 I started work at a new job, on an 8-year-old code base.
So its incept date was 1987 or thereabouts.
The code is probably still in service. That's what, 23 years?
There have been some moves of the company, but they probably kept the software (because it works).
If it's still in service now, it will still be in service in a decade or so.
That's not surprising for high-tech code, written (mostly) in C.
In 1999 I started at a new job; the codebase had antecedents back to 1984.
The new version I designed in the 2000s is still in service, with design elements like data file formats carried over from the previous one (and so on, back), making it a development program spanning over 26 years.
So the year 2038 problem is starting to loom a little, as those 32-bit signed time_t values roll over.
Remember that one of the major bonuses of modern programming is reuse.
So that means the code you write originally to solve one problem may get re-purposed and used in a completely different scenario years later (maybe even without your knowledge, by a team mate).
As a side note:
One of the major pluses of automated unit testing is that it tests code you can't even remember is in the system! :)
As long as people can continue to bill for support with people willing to pay for it.
3-5 years max. After that you have moved on to another job and left your crappy code behind.

How to measure usability to get hard data?

There are a few posts on usability but none of them was useful to me.
I need a quantitative measure of usability of some part of an application.
I need to estimate it in hard numbers to be able to compare it with future versions (e.g. for reporting purposes). The simplest way is to count clicks and keystrokes, but this seems too simple (for example, is the cost of filling a text field simply the sum of typing all the letters? I suspect it is more complicated).
I need some mathematical model for that so I can estimate the numbers.
Does anyone know anything about this?
P.S. I don't need links to resources about designing user interfaces. I already have them. What I need is a mathematical apparatus for measuring the usability of an existing application's interface in hard numbers.
Thanks in advance.
http://www.techsmith.com/morae.asp
This is what Microsoft used in part when they spent millions redesigning Office 2007 with the ribbon toolbar.
Here is how Office 2007 was analyzed:
http://cs.winona.edu/CSConference/2007proceedings/caty.pdf
Be sure to check out the references at the end of the PDF too; there's a ton of good stuff there. Look up how Microsoft did Office 2007 (regardless of how you feel about it); they spent a ton of money on this stuff.
The main concepts to approach this with are Effectiveness and Efficiency (and, in some cases, Efficacy). The basic points to remember are outlined on this webpage.
What you really want to look at doing is 'inspection' methods of measuring usability. These are typically more expensive to set up (both in terms of time, and finance), but can yield significant results if done properly. These methods include things like heuristic evaluation, which is simply comparing the system interface, and the usage of the system interface, with your usability heuristics (though, from what you've said above, this probably isn't what you're after).
More suited to your use, however, will be 'testing' methods, whereby you observe users performing tasks on your system. This is partially related to the point of effectiveness and efficiency, but can include various things, such as the "Think Aloud" concept (which works really well in certain circumstances, depending on the software being tested).
Jakob Nielsen has a decent (short) article on his website. There's another one, but it's more related to how to test in order to be representative, rather than how to perform the testing itself.
Consider measuring the time to perform critical tasks (using a new user and an experienced user) and the number of data entry errors for performing those tasks.
First you want to define goals: for example increasing the percentage of users who can complete a certain set of tasks, and reducing the time they need for it.
Then get two cameras and a few users (5-10), give them a list of tasks to complete, and ask them to think out loud. Half of the users should use the "old" system, the rest should use the new one.
Review the tapes, measure the time it took, measure success rates, discuss endlessly about interpretations.
Alternatively, you can develop a system for bucket-testing -- it works the same way, though it makes it far more difficult to find out something new. On the other hand, it's much cheaper, so you can do many more iterations. Of course that's limited to sites you can open to public testing.
That obviously implies you're trying to get comparative data between two designs. I can't think of a way of expressing usability as a value.
You might want to look into the GOMS model (Goals, Operators, Methods, and Selection rules). In my opinion it is a very difficult research tool to use, but it does provide a "mathematical" basis for measuring performance in a strictly controlled environment. It is best used with "expert" users. See this very interesting case study of Project Ernestine for New England Telephone operators.
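For a feel of the numbers, here is a minimal sketch of the Keystroke-Level Model (KLM), the simplest member of the GOMS family. The operator times are the classic published averages from Card, Moran and Newell; the task breakdown is my own illustration:

    # Keystroke-Level Model: a task's time is the sum of its operators' times.
    KLM = {
        "K": 0.2,   # press a key (average skilled typist)
        "P": 1.1,   # point with the mouse at a target
        "H": 0.4,   # home hands between keyboard and mouse
        "M": 1.35,  # mental preparation
    }

    def klm_time(ops):
        return sum(KLM[op] for op in ops)

    # Filling a 10-character text field: think, point, click into it, type.
    task = ["M", "P", "K"] + ["K"] * 10
    print(f"{klm_time(task):.2f} s")  # 4.65 s -- more than the bare 10 keystrokes

This also answers the question's aside: the cost of filling a text field is not just the sum of its letters, because pointing and mental preparation dominate short inputs.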
Measuring usability quantitatively is an extremely hard problem. I tackled this as a part of my doctoral work. The short answer is, yes, you can measure it; no, you can't use the results in a vacuum. You have to understand why something took longer or shorter; simply comparing numbers is worse than useless, because it's misleading.
For comparing alternate interfaces it works okay. In a longitudinal study, where users are bringing their past expertise with version 1 into their use of version 2, it's not going to be as useful. You will also need to take into account time to learn the interface, including time to re-understand the interface if the user's been away from it. Finally, if the task is of variable difficulty (and this is the usual case in the real world) then your numbers will be all over the map unless you have some way to factor out this difficulty.
GOMS (mentioned above) is a good method to use during the design phase to get an intuition about whether interface A is better than B at doing a specific task. However, it only addresses error-free performance by expert users, and only measures low-level task execution time. If the user figures out a more efficient way to do their work that you haven't thought of, you won't have a GOMS estimate for it and will have to draft one up.
Some specific measures that you could look into:
Measuring clock time for a standard task is good if you want to know what takes a long time. However, lab tests generally involve test subjects working much harder and concentrating much more than they do in everyday work, so comparing results from the lab to real users is going to be misleading.
Error rate: how often the user makes mistakes or backtracks. Especially if you notice the same sort of error occurring over and over again.
Appearance of workarounds: if your users are working around a feature, or taking a bunch of steps that you think are dumb, it may be a sign that your interface doesn't give them the tools they need to solve their problems.
Don't underestimate simply asking users how well they thought things went. Subjective usability is finicky but can be revealing.

How would you go about reverse engineering a set of binary data pulled from a device?

A friend of mine brought up this question the other day. He recently bought a Garmin heart rate monitor which keeps track of his heart rate and allows him to upload a day's stats to his computer.
The only problem is that there are no Linux drivers for the Garmin USB device. He has managed to interpret some of the data, such as the model number and his user details, and has identified what are essentially binary data tables, which we assume represent a series of heart rate recordings and the times they were taken.
Where does one start reverse engineering data when you know nothing about its structure?
I had the same problem and initially found this project at Google Code, which aims to build a cross-platform version of the tools for Garmin devices... see: http://code.google.com/p/garmintools/. There's a link on the front page of that project to the protocols you need, which Garmin was thoughtful enough to release publicly.
And here's a direct link to the Garmin I/O specification: http://www.garmin.com/support/pdf/IOSDK.zip
I'd start by looking at the data in a hex editor, hopefully a good one that knows the most common encodings (ASCII, Unicode, etc.), and then try to make sense of the data you know it has stored.
As another poster mentioned, reverse engineering can be hairy - not in practice but in legality.
That being said, you may be able to find everything related to your root question by checking out this project and its code... and they handle the runner's heart rate/GPS combo data as well:
http://www.gpsbabel.org/
I'd suggest you start by checking the legality of reverse engineering in your country of origin. Most countries have very strict laws about what is and isn't allowed when reverse engineering devices and code.
I would start by seeing what data is being sent by the device, then consider how such data could be represented and packed.
I would first capture many samples and see if any pattern presents itself. The heart beat is fairly regular, so a periodic pattern would suggest a measurement related to the heart itself. I would also look for bit fields which are monotonically increasing, as that would suggest some sort of timestamp.
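A hedged sketch of that timestamp hunt, assuming fixed-size records (the record size is a guess you would iterate on):

    import struct

    def monotonic_u32_offsets(data, record_size):
        """Report (offset, endianness) pairs where a u32 field increases
        monotonically across records -- candidate timestamps or counters."""
        records = [data[i:i + record_size]
                   for i in range(0, len(data) - record_size + 1, record_size)]
        hits = []
        for off in range(record_size - 3):
            for endian in ("<I", ">I"):  # try little- and big-endian
                vals = [struct.unpack_from(endian, r, off)[0] for r in records]
                if len(vals) > 1 and all(a < b for a, b in zip(vals, vals[1:])):
                    hits.append((off, endian))
        return hits

Run it with several guessed record sizes; the size that makes a field light up consistently is probably the real one.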
Having formed a hypothesis about what is where, I would write a program to test it, graph the results, and see if they make sense. If it's close but not quite right, closer inspection will probably reveal that you need a scaling factor here or there. It is also entirely possible that the data needs processing before it looks anything like what their program shows, e.g. the data points might need to be integrated. If I get garbage, it's back to the drawing board :-)
I would also check the manufacturer's website, or maybe run strings on their binaries. Finding someone who works in the field of biomedical engineering would also be on my list, as they would probably know what protocols are typically used, if any. I would also look for these protocols and see if any could be applied to the data I am seeing.
I'd start by creating a hex dump of the data. Figure it's probably blocked in power-of-two-sized chunks. Start looking for repeating patterns, and think about what kind of data they're probably sending. Either they're recording each heart beat individually, or they're recording whatever the sensor sends at fixed intervals. If it's individual beats, there's going to be a time delta (since the last beat), a duration, and a max or average strength of some sort. If it's fixed intervals, it'll probably be a simple vector of readings. There'll probably be a preamble of some sort, with a start timestamp and the sampling rate. You can try decoding the timestamp yourself, or you might try simply feeding it to ctime() and seeing if they're using the standard absolute time format.
Keep in mind that lots of cheap A/D converters only produce 12-bit outputs, so your readings are unlikely to be larger than 16 bits (and the high-order 4 bits may be used for flags). I'd recommend resetting the device so that it's "blank", dumping and storing the contents, then taking a set of readings, recording the results (whatever the device normally reports), then dumping the contents again and trying to correlate the recorded results with whatever data appeared after the "blank" dump.
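A minimal hex dump to start that eyeballing with (any hex editor does the same; "dump.bin" is a hypothetical capture file):

    def hexdump(data, width=16):
        for i in range(0, len(data), width):
            chunk = data[i:i + width]
            hexpart = " ".join(f"{b:02x}" for b in chunk)
            text = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
            print(f"{i:08x}  {hexpart:<{width * 3}} {text}")

    with open("dump.bin", "rb") as f:
        hexdump(f.read())

Suspected 32-bit timestamps pulled out of the preamble can then be sanity-checked with time.ctime(), as suggested above.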
Unsure if this is what you're looking for, but Garmin has created an API that runs in your browser. It seems OS X is supported, as well as Windows browsers... I would try it from Chromium to see if it can be used instead of all this reverse engineering:
http://developer.garmin.com/web-device/garmin-communicator-plugin/
API Features:
Auto-detection of devices connected to a computer
Access to device product information like product name and software version
Read tracks, routes and waypoints from supported recreational, fitness and navigation devices
Write tracks, routes and waypoints to supported recreational, fitness and navigation devices
Read fitness data from supported fitness devices
Geo-code an address and save it to a device as a waypoint or favorite
Read and write Garmin XML files (GPX and TCX) as well as binary files
Support for most Garmin devices (USB, USB mass-storage, most serial devices)
Support for Internet Explorer, Firefox and Chrome on Microsoft Windows
Support for Safari, Firefox and Chrome on Mac OS X
Can you synthesize a heart beat using something like a computer speaker? (I have no idea how such devices actually work.) Watch how the binary results change based on different inputs.
Ripping apart the device and checking out what's inside would probably help too.