Recommended HTTP response size [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
I am programming a kind of social-network application and am struggling with one particular decision. I have a fairly large JSON structure (at least 300 KB) for markers on a map, and I wonder whether it is all right to render it into the HTML, or whether I should instead use Ajax to serve it in pieces. Is there any rule of thumb about the recommended size of a response (what is acceptable and what is not)? The Ajax solution, on the other hand, is more complicated and increases the number of requests to my server.

What's probably most important in your situation is perceived performance (how fast the site feels to the user).
I can't provide a rule of thumb on file size, but I can tell you that your average user gets impatient if a page appears to take more than 500 ms to load.
It might be best to load something (e.g. your base map) quickly and then use Ajax requests to progressively add the pieces (geographic markers). The improved perceived performance of things "loading" on the page as the user watches will outweigh the decrease in actual performance caused by making multiple calls to the server. Otherwise, your user will be staring at a blank screen for the second or so it takes to load the entire HTML file.
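The progressive approach can be sketched as a loop that fetches markers in batches and renders each batch as soon as it arrives. This is only a sketch: the endpoint shape, page size, and callback names are assumptions, not anything from the question.

```javascript
// Render the page shell first, then pull markers in batches so the user sees
// the map "filling in" instead of staring at a blank screen.
async function loadMarkersProgressively(fetchPage, renderBatch, pageSize = 200) {
  let page = 0;
  for (;;) {
    // e.g. fetchPage could wrap: fetch(`/markers?page=${page}&size=${pageSize}`)
    const batch = await fetchPage(page, pageSize);
    if (batch.length === 0) break; // an empty batch signals "no more markers"
    renderBatch(batch);            // add this batch to the map right away
    page += 1;
  }
}
```

In a real page, `fetchPage` would call `fetch(...).then(r => r.json())` and `renderBatch` would add markers to the map library you use.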

Related

Would it be faster to use server-side or client-side pagination/filtering/searching? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 3 years ago.
The current system that I am working on handles a variety of dataset sizes; most are around 100 results, but a handful of clients have 250,000 or more. We need to handle searching across these result fields, pagination with varying page sizes up to 50, and filtering all results on a specific field.
Currently the server is set up to do all of these functions. Something to consider is that a search fires off a backend call, a column filter fires off another backend call, and so on: lots of (most likely fast) calls to the backend.
The client could do these things on a cached copy of the large dataset, but filtering/sorting would probably be slower when the dataset reaches the higher end of the spectrum.
Our primary considerations are speed and user experience. The backend approach would likely mean faster but more frequent calls, causing lots of short load times and spinners for the user. The frontend approach would mean a long initial load time, but faster loads and data changes for additional operations like filter/sort.
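For scale, the client-side option amounts to one big fetch up front and then pure in-memory operations like the sketch below. The field names and query shape here are invented for illustration; the point is that filter, sort, and paginate over a cached array are each a single pass.

```javascript
// Sketch of client-side search/sort/pagination over a cached dataset.
// Even at 250,000 rows, a linear filter like this typically runs in tens of
// milliseconds; the dominant cost of this approach is the initial download.
function queryCached(rows, { search = '', sortKey = null, page = 0, pageSize = 50 } = {}) {
  let out = rows;
  if (search) {
    const needle = search.toLowerCase();
    out = out.filter(r =>
      Object.values(r).some(v => String(v).toLowerCase().includes(needle)));
  }
  if (sortKey) {
    // copy before sorting so the cached array keeps its original order
    out = [...out].sort((a, b) =>
      a[sortKey] < b[sortKey] ? -1 : a[sortKey] > b[sortKey] ? 1 : 0);
  }
  return out.slice(page * pageSize, (page + 1) * pageSize);
}
```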
To those that have run into similar issues, what do you recommend? What were your concerns? Could you offer some good resources for this type of issue? Any and all assistance would be helpful.
PS sorry if this doesn't fit the standard code questions on SO, just looking for experienced help on this issue.
In the case of large data, you have to use server-side sorting, searching, and pagination.
For performance, you should cache your HTTP calls if you are calling the same endpoints several times within a given time period.
You can find many examples online of caching HTTP calls using RxJS, with handy operators like shareReplay that can cache and replay data for every subscriber, which lets you avoid making many calls to the server.
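The same idea as `shareReplay` can be sketched without any framework: remember the promise for each URL for a short time, so repeated calls within the window reuse the cached (or in-flight) result instead of hitting the server. The TTL value and function names here are arbitrary assumptions.

```javascript
// Minimal time-limited cache for HTTP calls, framework-free.
// fetchFn is injectable (e.g. the browser's fetch) so the cache is testable.
function makeCachedFetcher(fetchFn, ttlMs = 30000, now = Date.now) {
  const cache = new Map(); // url -> { at, promise }
  return (url) => {
    const hit = cache.get(url);
    if (hit && now() - hit.at < ttlMs) {
      return hit.promise; // reuse a recent or still-in-flight call
    }
    const promise = fetchFn(url);
    cache.set(url, { at: now(), promise });
    return promise;
  };
}
```

Caching the promise (rather than the resolved value) also deduplicates concurrent requests for the same endpoint.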

How do I prevent downloading media on the website? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
I have a website where I host music, but I do not want anyone to be able to download it (or at least I want to make it harder); I want people to just listen online, like on YouTube.
Unfortunately, no. It is a bit like trying to prevent someone from recording an on-air broadcast. When you send video data over the internet to someone's player, they can simply store the information being sent to the player unless you obfuscate it and make it so that the player will only work under certain circumstances and will not share the data. This, by definition, is DRM.
What DRM attempts to do is control the reading of the data entirely, so that it can not be copied. This has varying degrees of success and rarely, if ever, works particularly well. It may keep honest people honest, but if you are sending someone data in a way that they can access it, measures to try to stop them from copying it are... difficult. The most advanced systems use special display drivers and encrypt the data right up to the point it is being displayed on the screen (HDCP), that way other software on the computer can't directly pull the information off the frame buffer being prepared for the screen.
There may be some ways you can mildly obfuscate access to your video, but ultimately, if you send it in the clear, it is trivial for a knowledgeable viewer to store the datastream. If you use DRM, it is substantially harder, but still likely to be worked around by a dedicated attacker.

What are the advantages and disadvantages of pretty printing JSON? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 7 years ago.
Pretty-printing JSON tends to make it heavier than JSON rendered without pretty-printing. Beyond that, I can't think of anything else that differs between pretty-printing and not.
Let's say you want to provide a public RESTful web API: will pretty-printing affect server performance, round-trip time, etc.?
So again, what are the advantages and disadvantages of pretty printing JSON?
Advantages
Easier to read
Disadvantages
The size of the data will increase
There will be some computational overhead
Thinking about this for all of 4 more seconds, I can't add any more to those lists! :P
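The size cost is easy to see for yourself: serialize the same object compact and pretty and compare. The payload below is made up; the exact ratio depends on how nested your data is.

```javascript
// The same object, compact vs pretty-printed.
const payload = { users: [{ id: 1, name: 'Ada' }, { id: 2, name: 'Alan' }] };

const compact = JSON.stringify(payload);          // one line, no extra whitespace
const pretty  = JSON.stringify(payload, null, 2); // indented for humans

console.log(compact.length, pretty.length); // the pretty version is larger
```

Both strings parse back to the same value, which is why tools don't care either way.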
Tools can parse both versions just the same, but the pretty-printed versions are more readable for humans.
Why doesn't everyone pretty print?
For the same reason people don't turn on their car lights during the day, even though it would improve their visibility:
People don't think about it: json.dump() works, move on to the next problem.
It's not the default, so it takes a tiny bit of manual work
People don't see it as something they should optimize for.
People tend to micro-optimize other things (JSON string size, car battery usage)

Is a 610kb image too large for download in a smartphone app? (over mobile data network) [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
I'm getting an image of 610 KB from an HTTP source and displaying it on my app's main page, but I feel like it's too large. There are no alternatives: the provider does not have any lower-resolution images. It may eat away at users' bandwidth if they refresh too often. How should I handle this?
EDIT: it's a live space image of Earth, updated every 30 seconds (but I programmed the app to update every hour, i.e. it caches the image to the phone so it won't update unless 1 or more hours have passed).
BUT there is a refresh button; the user can force an update whenever they want.
I've never heard of such a limitation in WP. As long as your app is in the foreground, it can download large files without problems. 610 KB isn't a big deal, and it can be downloaded in a matter of seconds.
What you should pay attention to, however, is to download your image and do any heavy post-processing asynchronously (in a background thread). Also, make sure that the user is alerted that a download is in progress so that they can wait for it.
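The once-an-hour caching rule from the question can be captured in one small guard, with the refresh button bypassing the check. A sketch (the helper name and parameters are invented for illustration):

```javascript
const HOUR_MS = 60 * 60 * 1000;

// Decide whether to hit the network or serve the cached image.
function shouldRedownload(lastFetchedMs, nowMs, forced = false) {
  if (forced) return true;                 // user pressed the refresh button
  if (lastFetchedMs === null) return true; // nothing cached yet
  return nowMs - lastFetchedMs >= HOUR_MS; // otherwise, at most once per hour
}
```

The download itself would then run on a background thread, as the answer above recommends.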

Load time for a published web app is too slow [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
The execution time for the scripts is slow (although that much is to be expected in a browser).
Also, the published web app's load time is intolerable, and the page contains junk (probably because the UI is done with UIBuilder and the whitespace was emitted as Unicode characters or whatever).
Two questions here.
For seamless responsiveness, is it simply that the complete JavaScript must be loaded and run in the browser, and there is nothing we can do about it?
Are there any optimization techniques we should look at?
I have found that hand-rolling the HTML and delivering it with HtmlService is significantly quicker than UIService. This obviously depends on your circumstances and on how confident you are scripting your own validators and onEdit functions. But as a possible optimisation, this is where I'd start.
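A sketch of what "hand-rolling the HTML" might look like: build the page as a plain string with a pure helper (easy to test), then serve it from Apps Script. The page content and helper name are invented; `HtmlService.createHtmlOutput` is the real Apps Script call.

```javascript
// Pure string builder for the page, with no dependency on Apps Script globals.
function buildPage(title, items) {
  const list = items.map(i => `<li>${i}</li>`).join('');
  return `<!DOCTYPE html><html><head><title>${title}</title></head>` +
         `<body><h1>${title}</h1><ul>${list}</ul></body></html>`;
}

// In Apps Script, a doGet() handler would then serve it:
// function doGet() {
//   return HtmlService.createHtmlOutput(buildPage('My app', ['a', 'b']));
// }
```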