Users need to select a car.
We have several dropdowns for picking a car: year, make, model and submodel.
Initially we don't know what to use for the select options for make/model/submodel as they are interdependent.
Once the year is picked, we use AJAX to make a request that queries ActiveRecord and populates the make dropdown. Picking a make triggers another AJAX query to populate the model dropdown, and picking a model does the same for the submodel dropdown.
The problem is that this adds up to a lot of separate network requests, and in real-world conditions of low bandwidth, network issues, etc., there are often pauses that severely impact the user experience and occasionally lead to outright failures.
What approaches could help avoid all these network requests? Is there an approach that could store all of the several thousand make-model combinations in the client browser?
Currently the data is stored in a SQL database accessed via ActiveRecord in the Rails framework. Each dropdown selection results in another query because you can't populate and show make until you know year, and you can't populate and show model until you know make. The same goes for submodel (though I've omitted submodel from the rest of this post for simplicity!).
Would storing the JSON data for the 10,000 combinations in sessionStorage (http://simonsmith.io/speeding-things-up-with-sessionstorage/) be possible? I see that sessionStorage can generally be relied on to hold at least 5 MB (5,242,880 bytes), which gives me 5,242,880 / 10,000 ≈ 524 bytes per record. Probably enough? Since it persists for the session and across pages, in many cases we could actually kick this off on the previous page, and if that had time to finish we wouldn't need the external call at all on the relevant (next) page.
We would need to refresh that data either occasionally or on demand, as new year-make-model combinations are added periodically (several times a year).
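For what it's worth, here is a minimal sketch of that idea, assuming a hypothetical /vehicles.json endpoint that returns all the combinations in one payload:

function loadVehicleData(onReady) {
    var cached = sessionStorage.getItem('vehicleData');
    if (cached) {
        onReady(JSON.parse(cached));
        return;
    }
    // First hit this session: fetch the full dataset once, then serve every
    // dropdown from memory with no further network requests.
    $.getJSON('/vehicles.json', function (data) {
        try {
            sessionStorage.setItem('vehicleData', JSON.stringify(data));
        } catch (e) {
            // Quota exceeded or storage disabled: fall back to per-dropdown AJAX.
        }
        onReady(data);
    });
}

A version stamp in the payload could then be compared against the server's on each page load to decide when to refresh.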
Ultimately I think the solution here could be very useful to a large number of applications and companies. The example of picking a vehicle is itself used by dozens of major car insurance websites (who all do the multiple calls right now). The general approach of storing client-side data for relationship-dependent dropdowns could also apply in many other situations, such as online shopping for make-brand-year-model. The backend code that populates sessionStorage could equally be implemented in other frameworks.
Another option might be Google's Lovefield - https://github.com/google/lovefield (more at https://www.youtube.com/watch?v=S1AUIq8GA1k).
It's open source and works in Firefox, Chrome, IE, Safari, etc.
It seems like sessionStorage might be better for our (considerable) business than basing things on a Google '100-day dev' side project - though it is open source.
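If we did try Lovefield, its quick-start-style API would look roughly like this (the schema and query below are illustrative only, adapted from its README rather than from our app):

var schemaBuilder = lf.schema.create('vehicles', 1);
schemaBuilder.createTable('Vehicle')
    .addColumn('id', lf.Type.INTEGER)
    .addColumn('year', lf.Type.INTEGER)
    .addColumn('make', lf.Type.STRING)
    .addColumn('model', lf.Type.STRING)
    .addPrimaryKey(['id']);

schemaBuilder.connect().then(function (db) {
    var v = db.getSchema().table('Vehicle');
    // e.g. the distinct makes for a chosen year, queried locally in the browser
    return db.select(lf.fn.distinct(v.make))
             .from(v)
             .where(v.year.eq(2016))
             .exec();
}).then(function (rows) {
    console.log(rows);
});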
You can create a JSON object holding all the details and, based on the value selected, loop over the array to populate the next dropdown:
var cardetail = [{
"name": "MARUTI",
"model": [{
"name": "SWIFT",
"year": ["2005", "2006", "2008"]
}, {
"name": "ALTO",
"year": ["2009", "2010", "2011"]
}]
}, {
"name": "Hundai",
"model": [{
"name": "I20",
"year": ["2011", "2012", "2013"]
}, {
"name": "I20",
"year": ["2013", "2014", "2015"]
}]
}];
var currentCompany = null;
var currentModel = null;
$(document).ready(function() {
    // Populate the company dropdown once on page load.
    $("#company").append("<option value=''>Select Company</option>");
    for (var i = 0; i < cardetail.length; i++) {
        $("#company").append("<option value='" + cardetail[i].name + "'>" + cardetail[i].name + "</option>");
    }
    // When a company is chosen, look up its entry and rebuild the model dropdown.
    $("#company").change(function() {
        currentCompany = null;
        for (var i = 0; i < cardetail.length; i++) {
            if (cardetail[i].name == $("#company").val()) {
                currentCompany = cardetail[i];
            }
        }
        $("#model").html("");
        if (!currentCompany) return; // the "Select Company" placeholder is selected
        for (var i = 0; i < currentCompany.model.length; i++) {
            $("#model").append("<option value='" + currentCompany.model[i].name + "'>" + currentCompany.model[i].name + "</option>");
        }
        $("#model").change(); // refresh the year dropdown for the first listed model
    });
$("#company").change(function() {
for (i = 0; i < cardetail.length; i++) {
if (cardetail[i].name == $("#company").val()) {
currentCumpany = cardetail[i];
}
};
$("#model").html("");
for (i = 0; i < currentCumpany.model.length; i++) {
$("#model").append("<option value='" + currentCumpany.model[i].name + "'>" + currentCumpany.model[i].name + "</option>");
};
});
$("#model").change(function() {
for (i = 0; i < currentCumpany.model.length; i++) {
if (currentCumpany.model[i].name == $("#model").val()) {
currentModel = currentCumpany.model[i];
}
};
$("#year").html("");
for (i = 0; i < currentModel.year.length; i++) {
$("#year").append("<option value='" + currentModel.year[i] + "'>" + currentModel.year[i] + "</option>");
};
});
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<select id="company"></select>
<select id="model"></select>
<select id="year"></select>
First, unless the requisite bandwidth is too expensive, you could conceivably check the cache and then start making requests for popular makes/models/submodels as soon as (or even before) the user picks a year, and cache the results. There's even a full RDBMS for the browser now (full disclosure: it's new and I haven't played with it much) which sits atop IndexedDB.
In terms of picking which ones to preload, you could do it based on units produced, units sold, Car and Driver magazine rankings, data-mining your actual users' requests, whatever.
I'm of the opinion that from a UX perspective you should at least be caching the requests the user actually makes and offering an option on load to jump right back to the last year/make/model they searched for rather than having them enter it all fresh each visit. Having popular vehicles preloaded only makes things easier. How much you want to push the envelope with predictive analysis of what a given user is likely to search for is up to your team skills/budget/time constraints.
I realize that this isn't a full answer per se; I'm not sure the question as stated has one (e.g. 'use this strategy/framework/library and all your problems will magically disappear! It even makes julienned fries!'). But when faced with this kind of problem, my first thought is how to get more (hopefully relevant) data to the client sooner, which hopefully translates to faster (in the UX sense of fast).
I would also recommend that you put that popular data in static JSON files to request, rather than hitting Rails/ActiveRecord/the database server each time. That alone would shave valuable milliseconds off your response times (not to mention the usage load on those machines).
It's not like that data really changes; a 2009 Toyota RAV4 has the same specs it did in... 2009.
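For example, something like this on the year dropdown (the static path is hypothetical):

$("#year").change(function () {
    var year = $(this).val();
    var key = "makes-" + year;
    if (sessionStorage.getItem(key)) return; // already cached this session
    // A flat file served straight by the web server - no Rails/ActiveRecord hit.
    $.getJSON("/static/vehicles/" + year + ".json", function (data) {
        sessionStorage.setItem(key, JSON.stringify(data));
    });
});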
Related
I'm trying to write a userscript for a friend. The website I'm writing it for (app.patientaccess.com) tells you what doctor's appointments you have, among other things. However, in order to write my userscript, I need to know how the app handles appointments for the following year.
At the moment, the only way to know is to wait until the end of the year, when my friend starts making appointments for the following year. Since it's an Angular app, I'd rather, if possible, point it at a fabricated JSON file of my own creation when the app requests that particular data. In that file I can give it some data for this year and next year, and then I can see what happens with appointments made for the following year.
I'm hoping this can be done with an addon for Chrome or Firefox or perhaps some kind of free/open source software.
Thanks in advance.
I came up with a function that will accurately guess the year, given the day name, date and month, as long as it's within a couple of years either side of the current year.
// Day names used to match the dayName argument against Date.getDay().
var daysOfTheWeek = ['Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday'];

function calculateYear(dayName, dayOfMonth, monthNum, returnDateObj) {
    monthNum -= 1; // JavaScript Date months are zero-based
    var maxIterations = 3;
    var startYear = (new Date()).getFullYear();
    var dateObj = new Date(startYear, monthNum, dayOfMonth);
    // Try the current year first, then fan out one year at a time in both directions.
    for (var i = 0; i < maxIterations; i++) {
        dateObj.setFullYear(startYear + i);
        if (dayName == daysOfTheWeek[dateObj.getDay()]) {
            return (returnDateObj) ? dateObj : dateObj.getFullYear();
        }
        dateObj.setFullYear(startYear - (i + 1));
        if (dayName == daysOfTheWeek[dateObj.getDay()]) {
            return (returnDateObj) ? dateObj : dateObj.getFullYear();
        }
    }
    return 'No Match';
}
It works a treat, as you can see here.
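For a quick sanity check (the output depends on the current year when you run it):

// 25 December 2015 fell on a Friday, so with 2015 as the current year:
console.log(calculateYear('Friday', 25, 12));       // 2015
console.log(calculateYear('Friday', 25, 12, true)); // the matching Date object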
A client wants to set up A/B testing on the Product Detail Page related to the stock_level of a product's variants. Once the user selects their options, if the quantity is less than 5, I'd show something like "Hurry, only 3 more in stock"...
I believe I have the correct Inventory settings enabled, because I can retrieve the stock_level of a product without options.
Has anyone had success pulling variant SKU stock_levels in Stencil?
Thanks
This can be done using JavaScript in the assets/js/theme/common/product-details.js file. On initial page load and each time a product option is changed, a function updateView(data) is called. The data parameter contains all the info you need for the selected variation.
Starting on line 285, replace this:
updateView(data) {
const viewModel = this.getViewModel(this.$scope);
this.showMessageBox(data.stock_message || data.purchasing_message);
with this:
updateView(data) {
const viewModel = this.getViewModel(this.$scope);
if (data.stock < 5) {
data.stock_message = "Hurry, only " + data.stock + " left!";
}
this.showMessageBox(data.stock_message || data.purchasing_message);
I need to try to understand how MySQL processes/connections work. I have googled and don't see anything in layman's terms, so I'm asking here. Here is the situation.
Our host is giving us grief over "too many MySQL processes". We are on a shared server. We are allowed 0.2 of the server's MySQL processes - which they claim is 50 connections - and they say we are using 0.56.
From the technical support representative:
"Number of MySQL procs (average) - 0.59 meant that you were using
0.59% of the total MySQL connections available on the shared server. The acceptable value is 0.20 which is 50 connections. "
Here is what we are running:
Zen Cart 1.5.1, 35K products, with auto-updating of 1-20 products every 10 hours via cron.
PHP version 5.3.16
MySQL version 5.1.62-cll
Architecture i686
Operating system linux
We generally have about 5000 hits per day on the site, and Googlebot loves to visit even though I have the crawl rate set to minimum in Google Webmaster Tools.
I'm hoping someone can explain MySQL processes to me in terms of what this host is talking about. Every time I ask them I get an obfuscated answer that is vague and unclear. Is a new MySQL process created every time a visitor visits the site? That does not seem right.
According to the tech we were using 150 connections at that particular time.
EDIT:
Here is the connection function in Zen Cart:
function connect($zf_host, $zf_user, $zf_password, $zf_database, $zf_pconnect = 'false', $zp_real = false) {
    $this->database = $zf_database;
    $this->user = $zf_user;
    $this->host = $zf_host;
    $this->password = $zf_password;
    $this->pConnect = $zf_pconnect;
    $this->real = $zp_real;
    if (!function_exists('mysql_connect')) die ('Call to undefined function: mysql_connect(). Please install the MySQL Connector for PHP');
    $connectionRetry = 10;
    while (!isset($this->link) || ($this->link == FALSE && $connectionRetry != 0)) {
        $this->link = @mysql_connect($zf_host, $zf_user, $zf_password, true);
        $connectionRetry--;
    }
    if ($this->link) {
        if (@mysql_select_db($zf_database, $this->link)) {
            if (defined('DB_CHARSET') && version_compare(@mysql_get_server_info(), '4.1.0', '>=')) {
                @mysql_query("SET NAMES '" . DB_CHARSET . "'", $this->link);
                if (function_exists('mysql_set_charset')) {
                    @mysql_set_charset(DB_CHARSET, $this->link);
                } else {
                    @mysql_query("SET CHARACTER SET '" . DB_CHARSET . "'", $this->link);
                }
            }
            $this->db_connected = true;
            if (getenv('TZ') && !defined('DISABLE_MYSQL_TZ_SET')) @mysql_query("SET time_zone = '" . substr_replace(date("O"), ":", -2, 0) . "'", $this->link);
            return true;
        } else {
            $this->set_error(mysql_errno(), mysql_error(), $zp_real);
            return false;
        }
    } else {
        $this->set_error(mysql_errno(), mysql_error(), $zp_real);
        return false;
    }
}
I wonder if it is a problem with how the connection is opened. Try changing this line:
$this->link = @mysql_connect($zf_host, $zf_user, $zf_password, true);
to this:
$this->link = @mysql_connect($zf_host, $zf_user, $zf_password);
The manual is useful here - the fourth parameter ($new_link) is false by default, but your code is forcing it to true, which opens a new connection even if an existing one is already open. Leaving it false lets PHP reuse the existing connection, which avoids creating new connections unnecessarily, i.e. saves both time and memory.
I would offer a caveat though: modifying core code in a third-party system always needs to be done carefully. There may be a reason for the behaviour they've chosen, though there's not much in the way of comments to be able to tell. It may be worth asking a question via their support channels to see why it works this way, and whether they might consider changing it.
I was wondering, is there a limit on the number of tagged images you can return?
This is my code:
<script type="text/javascript">
$(function() {
$.ajax({
type:'GET',
dataType:'jsonp',
cache: false,
url:'https://api.instagram.com/v1/tags/[TAG NAME]/media/recent?client_id=[CLIENT ID]',
success: function(data) {
for (var i = 0; i < Math.min(50, data.data.length); i++) { // guard: the API may return fewer than 50
$(".pics").append("<li><a target='_blank' href='" + data.data[i].link +
"' class='upshot-instagram' rel='instagram-group'><img src='" + data.data[i].images.thumbnail.url + "'></a></li>");
}
}
});
});
</script>
I ask for 50, but I am only getting 20 images back. I know we have over 250 that have been tagged.
The API will only return 20 images per call. This is where the pagination data comes in handy: you can use the MAX_ID provided by the Instagram API - read more here.
This is in PHP and jQuery, but it can help you get on the right track: load more example
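Alternatively, here is a rough jQuery-only sketch in the style of your snippet. It follows the pagination.next_url field that the legacy API returns until enough items are collected; render() stands in for your code that appends to .pics:

function fetchTagged(url, collected, wanted) {
    $.ajax({
        type: 'GET',
        dataType: 'jsonp',
        url: url,
        success: function (data) {
            collected = collected.concat(data.data);
            if (collected.length < wanted && data.pagination && data.pagination.next_url) {
                fetchTagged(data.pagination.next_url, collected, wanted); // keep paging
            } else {
                render(collected.slice(0, wanted)); // render() = your append-to-.pics code
            }
        }
    });
}
fetchTagged('https://api.instagram.com/v1/tags/[TAG NAME]/media/recent?client_id=[CLIENT ID]', [], 50);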
The default number of items returned is 20. However, you can specify more using a count parameter. At the time of this writing, it appears to be 33, but that could change as it is undocumented.
So the answer to your question is - 33, but it can (and probably will) change.
To answer your problem, you can use Instafetch, a script I wrote, which will paginate and filter for you, and allows you to fetch as many pictures as you specify.
In 2021, using the Instagram Basic Display API, the default number of media items returned is 25. Using the limit parameter you can exceed this default, but the maximum number of items returned is 100.
Here is an example cURL code snippet using the Me endpoint:
GET https://graph.instagram.com/me
?fields={fields}
&access_token={access-token}
&limit={required number of items}
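And the same kind of request from JavaScript, as a sketch against the media edge (the token value is a placeholder and the field list is just an example):

var token = 'ACCESS-TOKEN';
fetch('https://graph.instagram.com/me/media'
        + '?fields=id,caption,media_url'
        + '&limit=100'
        + '&access_token=' + token)
    .then(function (res) { return res.json(); })
    .then(function (json) {
        console.log(json.data.length + ' items returned');
    });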
I have something like 40 million TIFF documents, all 1-bit single-sheet duplex scans (a front and a back page in each file). In about 40% of cases, the back image of these TIFFs is 'blank' and I'd like to remove them before I load into a CMS, to reduce space requirements.
Is there a simple method to look at the data content of each page and delete it if it falls under a preset threshold, say 2% 'black'?
I'm technology agnostic on this one, but a C# solution would probably be the easiest to support. Problem is, I've no image manipulation experience so don't really know where to start.
Edit to add: The images are old scans and so are 'dirty', so this is not expected to be an exact science. The threshold would need to be set to avoid the chance of false positives.
You probably should:
open each image
iterate through its pages (using Bitmap.GetFrameCount / Bitmap.SelectActiveFrame methods)
access bits of each page (using Bitmap.LockBits method)
analyze contents of each page (simple loop)
if the contents are worthwhile, copy the data to another image (Bitmap.LockBits and a loop)
This task isn't particularly complex but will require some code to be written. This site contains some samples that you may search for, using the method names as keywords.
P.S. I assume that all of the images can be successfully loaded into a System.Drawing.Bitmap.
You can do something like that with DotImage (disclaimer, I work for Atalasoft and have written most of the underlying classes that you'd be using). The code to do it will look something like this:
public void RemoveBlankPages(Stream stm)
{
List<int> blanks = new List<int>();
if (GetBlankPages(stm, blanks)) {
// all pages blank - delete file? Skip? Your choice.
}
else {
// memory stream is convenient - maybe a temp file instead?
using (MemoryStream ostm = new MemoryStream()) {
// pulls out all the blanks and writes to the temp stream
stm.Seek(0, SeekOrigin.Begin);
RemoveBlanks(blanks, stm, ostm);
CopyStream(ostm, stm); // copies first stm to second, truncating at end
}
}
}
private bool GetBlankPages(Stream stm, List<int> blanks)
{
TiffDecoder decoder = new TiffDecoder();
ImageInfo info = decoder.GetImageInfo(stm);
for (int i=0; i < info.FrameCount; i++) {
try {
stm.Seek(0, SeekOrigin.Begin);
using (AtalaImage image = decoder.Read(stm, i, null)) {
if (IsBlankPage(image)) blanks.Add(i);
}
}
catch {
// bad file - skip? could also try to remove the bad page:
blanks.Add(i);
}
}
return blanks.Count == info.FrameCount;
}
private bool IsBlankPage(AtalaImage image)
{
// you might want to configure the command to do noise removal and black border
// removal (or not) first.
BlankPageDetectionCommand command = new BlankPageDetectionCommand();
BlankPageDetectionResults results = command.Apply(image) as BlankPageDetectionResults;
return results.IsImageBlank;
}
private void RemoveBlanks(List<int> blanks, Stream source, Stream dest)
{
// blanks needs to be sorted low to high, which it will be if generated from
// above
TiffDocument doc = new TiffDocument(source);
int totalRemoved = 0;
foreach (int page in blanks) {
doc.Pages.RemoveAt(page - totalRemoved);
totalRemoved++;
}
doc.Save(dest);
}
You should note that blank page detection is not as simple as "are all the pixels white(-ish)?" since scanning introduces all kinds of interesting artifacts. To get the BlankPageDetectionCommand, you would need the Document Imaging package.
Are you interested in shrinking the files, or do you just want to stop people wasting their time viewing blank pages? You can do a quick and dirty edit of the files to rid yourself of known blank pages by just patching the offset that points to the second IFD to 0x00000000. Here's what I mean - TIFF files have a simple layout if you're just navigating through the pages:
TIFF Header (4 bytes)
First IFD offset (4 bytes - typically points to 0x00000008)
IFD:
Number of tags (2-bytes)
{individual TIFF tags} (12-bytes each)
Next IFD offset (4 bytes)
Just patch the "next IFD offset" to a value of 0x00000000 to "unlink" pages beyond the current one.
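If you wanted to script that, here is a minimal Node.js sketch of the same trick. It assumes a little-endian ('II') TIFF and zeroes the first page's next-IFD offset in place:

var fs = require('fs');

function truncateToFirstPage(path) {
    var fd = fs.openSync(path, 'r+');
    var header = Buffer.alloc(8);
    fs.readSync(fd, header, 0, 8, 0);

    // Bytes 0-1 hold the byte order; this sketch only handles 'II' (little-endian).
    if (header.toString('ascii', 0, 2) !== 'II') {
        fs.closeSync(fd);
        throw new Error('big-endian TIFF not handled in this sketch');
    }

    var firstIfdOffset = header.readUInt32LE(4); // bytes 4-7: offset of the first IFD

    // IFD layout: 2-byte tag count, count * 12 bytes of tags, 4-byte next-IFD offset.
    var countBuf = Buffer.alloc(2);
    fs.readSync(fd, countBuf, 0, 2, firstIfdOffset);
    var tagCount = countBuf.readUInt16LE(0);

    // Overwrite the next-IFD offset with 0x00000000 to unlink page two onward.
    var zeros = Buffer.alloc(4); // Buffer.alloc zero-fills
    fs.writeSync(fd, zeros, 0, 4, firstIfdOffset + 2 + tagCount * 12);
    fs.closeSync(fd);
}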