Just curious: what is the optimal delay for the 'Update Later' option in a software update dialog? One week, one month, until the next program restart? The idea is that the user doesn't get bothered too much, but at the same time doesn't get stuck with outdated software.
The following picture illustrates what I mean (I am not allowed to embed images on Stack Overflow yet, so below is just a plain link):
[image: the software update dialog]
Maybe in some cases offering 'Skip This Version' would be more user-friendly than 'Update Later'?
The right 'Remind Me Later' delay usually depends on the type of application you are making.
You really have three options. Remind Me Later reminds the user the next time they start the application. This is useful for applications that are started about once a day.
Remind me in 5, 15, or 30 minutes: useful for applications where the user needs the newest features but wants to finish whatever they are currently doing in your app.
Auto update: tell them a new update is available and ask when they want it installed (5 minutes, 15 minutes, 1 hour, next application start).
Skip This Version isn't recommended unless the application creates its own data and doesn't import data from previous versions of the application.
Remember you want to KISS it (Keep it Super Simple).
Hi guys, I am building a task distribution management system for my team, and I want to add the following functionality:
When I create a task, I will have an option to choose "how long is this task valid for being taken". For example, when creating the task I put "2 hours" in the <input id="valid-for">, then the task will only be displayed on the dashboard for 2 hours from the time it was created; after 2 hours -> "display: none".
I've searched the web for the mechanism for achieving this feature but didn't get a satisfying answer, probably because I don't know the right terminology to Google. I tried to use AJAX and a TIMESTAMP column in MySQL but didn't know how to proceed. Could anybody tell me how to achieve this feature using MySQL, jQuery, or any other technique? No code necessary, I just need some explanation.
Thanks guys!
Without knowing any more details, here is how I would consider writing the code:
In the database, have a start time and a use-by time.
In your browser page, you can run a script periodically, say every minute (this is called polling). In this case, you can use Ajax to call back to the server for updates.
At the server end, check for new tasks as well as expired tasks. Then send the results back to the Ajax caller.
Back at the browser, update the dashboard accordingly.
I would be inclined to remove the task on the browser rather than simply hide it.
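To make this concrete, here is a minimal sketch of the server-side check, assuming (purely for illustration) a Python/Flask backend and a MySQL table tasks(id, title, created_at, valid_hours); these names are not from the question:

from flask import Flask, jsonify
import pymysql

app = Flask(__name__)

def get_connection():
    # Connection details are placeholders
    return pymysql.connect(host="localhost", user="app",
                           password="secret", database="tasks_db")

@app.route("/tasks/active")
def active_tasks():
    # Return only the tasks whose validity window has not expired yet
    conn = get_connection()
    try:
        with conn.cursor() as cur:
            cur.execute(
                "SELECT id, title FROM tasks "
                "WHERE NOW() < created_at + INTERVAL valid_hours HOUR"
            )
            rows = cur.fetchall()
    finally:
        conn.close()
    # The browser polls this endpoint (say, once a minute) and redraws the dashboard
    return jsonify([{"id": r[0], "title": r[1]} for r in rows])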
I am new to programming and am working on pushing real-time data from a PLC to a web page, either by deploying HTML5 on the WAGO or via a Modbus driver wrapper. I honestly have tried to research this but don't know where to start. It will be a closed private network with little to no influence from the outside web. I am simply looking to display a single piece of live information as a proof of concept. Basically I'm trying to custom design a Groov program.
You might want to look into using OPC. Kepware & SoftwareToolbox are just 2 of many vendors that offer tools to help you get your data the way you want it.
There is an existing tool that does what you want, but I am under the impression you want to write one from scratch. The existing tool is http://www.softwaretoolbox.com/cogentdatahub/ if you are interested in looking at it for ideas.
I've been able to interface with a PLC using Python and Modbus TCP, with a Raspberry Pi as the web server. Python is a quick and easy language to learn. WebSockets are the HTML5 component best suited for real-time data.
Simple connection code (after you install everything):
from pymodbus.client.sync import ModbusTcpClient as ModbusClient
from time import sleep

# Replace with the IP address of your Modbus I/O module
client = ModbusClient('ip_address_of_modbus_IO')

if client.connect():
    # Read one discrete input starting at address 200 and print its state
    print(client.read_discrete_inputs(200, 1).bits[0])
    # Turn coil 0 on, wait, then turn coil 2 on
    client.write_coil(0, True)
    sleep(100)
    client.write_coil(2, True)
found here:
http://simplyautomationized.blogspot.com/2013/09/home-automation-project-2-rpi-light.html
You can create a websocket broadcast server using the example here:
http://simplyautomationized.blogspot.com/2015/09/raspberry-pi-create-websocket-api.html
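For the broadcast part, here is a rough sketch of the idea, assuming the Python websockets package (version 10 or newer); this is not the code from the linked post, and read_plc_value() is a placeholder you would replace with the pymodbus read shown above:

import asyncio
import websockets

connected = set()  # currently connected browser clients

async def handler(websocket):
    # Register each browser that connects and keep the socket open
    connected.add(websocket)
    try:
        await websocket.wait_closed()
    finally:
        connected.discard(websocket)

def read_plc_value():
    # Placeholder: read the real value with the pymodbus client shown above
    return 42

async def broadcast_loop():
    while True:
        websockets.broadcast(connected, str(read_plc_value()))  # push to every client
        await asyncio.sleep(1)  # roughly once per second

async def main():
    async with websockets.serve(handler, "0.0.0.0", 8765):
        await broadcast_loop()

asyncio.run(main())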
Fortunately, you cannot push data to a browser.
The Internet would become an even greater mess if you could.
To solve this, have your webpage contain a timer, written in JavaScript.
Every second or so, it does an AJAX request (e.g. using the jQuery implementation) to the server, which then delivers (almost) realtime data.
The webpage then displays that in some DOM element, e.g. an empty DIV.
So it's the browser polling your server.
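If it helps, here is a minimal sketch of the server end of that polling loop, using only the Python standard library; read_current_value() is a placeholder, and the browser's timer would GET /value and write the JSON response into the DIV:

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def read_current_value():
    # Placeholder: return whatever live value you want to display
    return 42

class ValueHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/value":
            body = json.dumps({"value": read_current_value()}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

HTTPServer(("0.0.0.0", 8000), ValueHandler).serve_forever()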
@BlueDog
The data is "almost" realtime because sampling once a second gives a delay of at least one second. In the ideal case, as soon as data changes, it would be pushed to the browser. Unfortunately the browser has no way of knowing that anything changed, so the best it can do is frequently "ask" for updates (polling).
How much the delay is depends on your poll frequency. If it's once per second one has to add the delays for transmission of the page request and the reply of the server. The transmission time depends on your network (which may be the Internet with all uncertainty involved). If the backbones involved have enough capacity I expect overall delay to be between 1 and 1.5 seconds. With a dedicated network and even more frequent polling, I expect that 0.5 seconds should be possible. These are however estimated averages. If I request a page over the Internet and my provider (again) has a problem, it may be hours before I receive what I want. Also things like virus scanners and OS updates may spoil your game.
So, practically: with a good broadband connection, a stable browser and the right process priorities it should be possible to get below 1 second overall delay (incl. poll time interval) for 95% of the time. Be prepared to reboot the client every few days. Most browsers leak memory and most OS'es do so too.
I have a decently popular Chrome extension, and yesterday I accidentally released a corrupted version of it and didn't catch it for 10 hours. Within those 10 hours, the extension was updated for most users, and I lost about half of them overnight based on my Google Analytics reports (I had about 600 pageviews every 30 minutes, and now I only have 285). When I found out about my mistake I quickly reverted to an older version that works, but now, about 30 hours after an update that fixes the corruption, my pageviews are still the same.
My questions are:
Have I lost all those users or has it simply not updated for them yet?
If an extension is corrupt does it still check for updates or do the users have to press repair?
Any insight would be fantastic. As you can imagine, losing half your users overnight because of one line of code is difficult to process.
What could have happened?
Well, this isn't a really clear situation, but, based on your information, there are a few possible scenarios:
Your users have uninstalled your extension because of this corrupt version. In this case (the worst one), it's pretty much impossible to bring your users back, unfortunately.
Your "corrupted" version had issues with the update handling. For example issues with chrome.runtime.onInstalled to check the update and add new features, or issues with the Analytics part. This means that:
Your extension worked fine before the update.
It has been updated with a broken update handling function/method.
The new update (rollback to a working version) didn't solve anything, because your already corrupted extension is now unable to apply updates and/or send page views to analytics.
Your users disabled your extension in an attempt to narrow down the issue (that's very uncommon, an edge case).
Your users didn't get the new update yet (which after thirty hours is also pretty uncommon).
What could you do?
Again, let's split the situation up:
In the first case, you cannot really do anything, unfortunately. That was a bad mistake! Learn from it and always test a thousand times before pushing updates.
Thinking about the second case? You should test your corrupted version on your machine, maybe using Chrome Canary to make things faster. This obviously means that you should have the previous versions stored somewhere; if you haven't got them, then it gets pretty hard, and for the future: always keep backups of your previous versions. Installing the old version, then manually updating to the corrupted one, and finally to the last one, can really help you understand what's going on. You should meticulously check the update method and see if there's something wrong.
Note: if you're not listening to chrome.runtime.onUpdateAvailable and manually calling chrome.runtime.reload() to update your extension immediately, a Chrome restart may be necessary for it to update.
Just wait. Although it's uncommon for this situation to happen, waiting is also the only thing you can do in such cases.
Well, same as case #3.
If an extension is corrupt does it still check for updates or do the users have to press repair?
Well, there's no such thing as a "corrupted extension". Chrome will always check for updates (at least if the user didn't disable them in chrome://flags), even if your extension is just a bunch of SyntaxErrors. Don't worry about this.
Extreme fix
In case you're not sure what to do, a redesign of your extension and a drastic purge of all the to-dos and bad practices is always a good thing. Just back up your previous version first, and start working on the 2.0 beta! Quite possibly, updating and improving the extension will bring many more users to it than just solving an existing issue. Personally, I have almost always seen installations increase after performing drastic restyles and rewriting better code from scratch.
I hope you find the problem and bring your users back ASAP.
As you can imagine, losing half your users overnight because of one line of code is difficult to process.
Yes, I do imagine. As an extension developer, I really suffered from similar mistakes when I was learning update handling. So... well, break a leg! Wishing you the best.
I have a Microsoft Access business-critical database that was originally created in the 90's and has been enlarged and upgraded up to Access 2007 at this point. We have essentially been using this database as the front end for a custom-written ERP system. We moved most of the data over to a SQL Server long ago, but we are still using MS Access as the front end. As the project grows (we have a full-time developer), we have started having stability problems and extremely frequent crashes of unknown causes.
As an example: one time out of ten or so, a certain form will crash if I change the data in one specific field. There is no code firing at the time; the data is in a local temporary table that typically only has 5 rows. If I change the data in the table nothing goes wrong, but if I change it on the form, Access will hard crash and dump me to the desktop. There are other examples of unexplained crashes I could provide.
I am looking for advice on where to go at this point -- the Access front end has essentially all of the business logic for running our company, so I can't just abandon it. Ideally we would re-write the entire front end in some other language. The problem is that as a small company we don't have the resources to re-write the entire system in anything resembling a good time frame, and we don't have the cash flow to pay someone else to do it. My ideal solution would be a conversion of some sort from the Access front end to another endpoint -- whether web or local Windows -- but my searches here and on Google make that seem like a non-starter.
So essentially every avenue I look at seems to be a dead-end:
We can't find the source of the crashes to stabilize our current system,
We can't stop production in our current system for as long as it would take to re-write it,
We can't afford to pay someone else to write a new system,
Automated conversion tools seem like a waste of money.
Are there other options or which of the options that I have thought of seems best?
We have an enterprise-level program with an Access front end and a SQL Server back end. I wonder if it might not help to split the program up into different pieces for diagnostics. For instance, if you have Order Entry and Inventory Management, you could have one front end for each function. (Yes, I can hear the howling in the background, but if it were only for the purpose of diagnosis, maybe it would help...)
You can also export the Access database objects to text files and then import them into a fresh new database, which on some occasions gets rid of weird errors.
Well, I guess I will rephrase the basic issue here. If you have two computing science graduates on staff, then they should have anticipated long ago that they had reached the limits of Access. I fail to see this as any different from an overworked oven in a restaurant that now has too many customers, or a delivery truck that does not have the capacity to deliver goods to customers.
Since funds don't exist for a re-write, your staff failed to put aside funds on a monthly basis to deal with this situation, and now your choices, as a result of this delayed action on a growing problem, place you in a difficult situation.
The computing science people you have on staff should have seen this wall and limit coming long ago. In my experience, most CS people consider Access rather limited, so even MORE alarm bells should have been ringing here, which means there is even less excuse for you to be placed in this unfortunate predicament.
So, assuming the computing science staff you have are maintaining this application well (and I graciously accept this is the case), a logical conclusion is that this application has reached or exceeded the limits of Access. As noted, such limits should have been anticipated long ago.
As you point out, funds do not exist for a re-write, and few choices exist without such funds.
However, your case may not be so bad, since we NOW know you have experienced developers on staff. Given this, my suggestion would be to consider breaking out modules or small manageable parts and features one at a time from the application, and having your well-trained and experienced developers build either a web interface, or perhaps something like .NET if you wish to stay 100% desktop. So this "window" of opportunity is a great chance to consider a change in architecture.
Since the data limits are those of SQL Server and NOT Access, both the existing Access front end and the new parts can easily operate on the same data. As you do this, you break out and remove the existing parts from the Access application. This suggests you would eventually return to acceptable stability in the Access application. At that point, you could continue, or stop to save funds.
As noted, without funds to re-write, the only choice is to find some means to free up SOME limited resources on a monthly basis to solve this problem.
At the end of the day, the solution to this problem is more resources, but without such resources then few technical choices and options exist here. Based on the information given so far YOU have made it clear you don't have resources here. However, the solution to this problem here requires resource allocation and planning.
In other words a technical fix to this problem without resources allocated is not likely an available option for you.
I apologize sincerely for not being able to give you a technology solution here, but this looks to be a solution that will require resources to be allocated to the problem and no simple shortcut or trick or magic silver bullet exists here.
On a wiki-style website, what can I do to prevent or mitigate write-write conflicts while still allowing the site to run quickly and keeping the site easy to use?
The problem I foresee is this:
User A begins editing a file
User B begins editing the file
User A finishes editing the file
User B finishes editing the file, accidentally overwriting all of User A's edits
Here were some approaches I came up with:
Have some sort of check-out / check-in / locking system (although I don't know how to prevent people from keeping a file checked out "too long", and I don't want users to be frustrated by not being allowed to make an edit)
Have some sort of diff system that shows any other changes made when a user commits their changes and allows some sort of merge (but I'm worried this will be hard to create and would make the site "too hard" to use)
Notify users of concurrent edits while they are making their changes (some sort of AJAX?)
Any other ways to go at this? Any examples of sites that implement this well?
Remember the version number (or ID) of the last change. Then read the entry before writing it and compare if this version is still the same.
In case of a conflict, inform the user that the entry they were trying to write was changed in the meantime. Support them with a diff.
Most wikis do it this way. MediaWiki, Usemod, etc.
Three-way merging: The first thing to point out is that most concurrent edits, particularly on longer documents, are to different sections of the text. As a result, by noting which revision Users A and B acquired, we can do a three-way merge, as detailed by Bill Ritcher of Guiffy Software. A three-way merge can identify where the edits have been made from the original, and unless they clash it can silently merge both edits into a new article. Ideally, at this point carry out the merge and show User B the new document so that she can choose to further revise it.
Collision resolution:
This leaves you with the scenario when both editors have edited the same section. In this case, merge everything else and offer the text of the three versions to User B - that is, include the original - with either User A's version in the textbox or User B's. That choice depends on whether you think the default should be to accept the latest (the user just clicks Save to retain their version) or force the editor to edit twice to get their changes in (they have to re-apply their changes to editor A's version of the section).
Using three-way merging like this avoids lock-outs, which are very difficult to handle well on the web (how long do you let them have the lock?), and the aggravating 'you might want to look again' scenario, which only works well for forum-style responses. It also retains the post-respond style of the web.
If you want to Ajax it up a bit, dynamically 3-way merge User A's version into User B's version while they are editing it, and notify them. Now that would be impressive.
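As a rough illustration of the clash check that decides whether the silent merge is possible, here is a sketch using only difflib from the Python standard library; a real three-way merge would also produce the merged text:

import difflib

def changed_ranges(base_lines, new_lines):
    # Line ranges of the base revision that this edit modified
    matcher = difflib.SequenceMatcher(None, base_lines, new_lines)
    return [(i1, i2) for tag, i1, i2, j1, j2 in matcher.get_opcodes()
            if tag != "equal"]

def edits_clash(base, version_a, version_b):
    # True if User A and User B edited overlapping regions of the base text
    ranges_a = changed_ranges(base.splitlines(), version_a.splitlines())
    ranges_b = changed_ranges(base.splitlines(), version_b.splitlines())
    for a1, a2 in ranges_a:
        for b1, b2 in ranges_b:
            if a1 < b2 and b1 < a2:  # the two ranges overlap
                return True
    return False

# If edits_clash(...) is False, both edits can be merged silently;
# otherwise show User B the three versions of the conflicting section.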
In Mediawiki, the server accepts the first change, and then when the second edit is saved a conflicts page comes up, and then the second person merges the two changes together. See Wikipedia: Help:Edit Conflicts
Using a locking mechanism will probably be the easiest to implement. Each article could have a lock field associated with it and a lock time. If the lock time exceeded some set value, you'd consider the lock to be invalid and remove it when checking out the article for editing. You could also keep track of open locks and remove them on session close. You'd also need to implement some concurrency control in the database (auto-generated timestamps, perhaps) so that you could make sure that you are checking in an update to the version that you checked out, just in case two people were able to edit the article at the same time. Only the one with the correct version would be able to successfully check in an edit.
You might also be able to find a difference engine that you could just use to construct differences, though displaying them in a wiki editor may be problematic -- actually displaying the differences is probably harder than constructing the diff. You'd rely on the versioning system to detect when you needed to reject an edit and perform a diff.
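A minimal sketch of that lock-with-timeout check, assuming a table articles(id, locked_by, locked_at) and a Python DB-API connection; the names and the 15-minute timeout are illustrative:

from datetime import datetime, timedelta

LOCK_TIMEOUT = timedelta(minutes=15)

def try_lock(conn, article_id, user_id):
    # Grab the edit lock unless someone else holds a lock that is still fresh
    cutoff = datetime.utcnow() - LOCK_TIMEOUT
    with conn.cursor() as cur:
        cur.execute(
            "UPDATE articles SET locked_by = %s, locked_at = %s "
            "WHERE id = %s AND (locked_by IS NULL OR locked_at < %s)",
            (user_id, datetime.utcnow(), article_id, cutoff),
        )
        acquired = cur.rowcount == 1  # one affected row means we now own the lock
    conn.commit()
    return acquired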
In Gmail, if we are writing a reply to a mail and someone else sends a reply while we are still typing it, a popup appears indicating that there is a new update and the update itself appears as another post without a page reload. This approach would suit your needs and if you can use Ajax to show the exact post with a link to diff of what was just updated while User B is still busy typing his entry that would be great.
As Ravi (and others) have said, you could use an AJAX approach and inform the user when another change is in progress. When an edit is submitted, just indicate the textual differences and let the second user work out how to merge the two versions.
However, I'd like to add something new you could try in addition to that: open a chat dialog between the editors while they're doing their edits. You could use something like embedded Gabbly for that, for instance.
The best conflict resolution is direct dialog, I say.
Your problem (lost update) is solved best using Optimistic Concurrency Control.
One implementation is to add a version column to each editable entity of the system. When the user edits, you load the row and display the HTML form to the user. A hidden field gives the version, let's say 3. The update query needs to look something like:
update articles set ..., version=4 where id=14 and version=3;
If the number of affected rows is 0, then someone has already updated article 14. All you need to decide then is how to deal with the situation. Some common solutions:
last commit wins
first commit wins
merge conflicting updates
let the user decide
Instead of an incrementing version int/long you can use a timestamp, but that is not recommended because:
retrieving the current time from the JVM isn't necessarily safe in a clustered environment, where nodes may not be time synchronized.
(quote from Java Persistence with Hibernate)
Some more info at the hibernate documentation.
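For completeness, here is a minimal sketch of the affected-rows check described above, assuming a Python DB-API connection (e.g. pymysql); the table and column names mirror the query above and are illustrative:

def save_article(conn, article_id, loaded_version, new_body):
    with conn.cursor() as cur:
        cur.execute(
            "UPDATE articles SET body = %s, version = %s "
            "WHERE id = %s AND version = %s",
            (new_body, loaded_version + 1, article_id, loaded_version),
        )
        updated = cur.rowcount
    conn.commit()
    if updated == 0:
        # Someone saved a newer version first: last/first commit wins, merge, or ask the user
        raise RuntimeError("Edit conflict: the article was modified by someone else")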
At my office, we have a policy that all data tables contain 4 fields:
CreatedBy
CreatedDate
LastUpdateBy
LastUpdateDate
That way there is a nice audit trail on who has done what to the records, at least most recently.
But most importantly, it becomes easy enough to compare the LastUpdateDate of the current or edited record on the screen (this requires you to store it on the page, in a cookie, or wherever) with the value in the database. If the values don't match, you can decide what to do from there.