I can't implement pagination (Vuetify or Bootstrap-Vue) with URL params

Help, people! I am unable to implement pagination with Vuetify or Bootstrap-Vue. I have a news portal. After login, the user is redirected to the news page, where each news item is rendered as a card, so the page ends up very long.
I can't figure out how to implement pagination, or how the data should change from page to page. My news arrives as an object; I save the data in the Vuex store and output it in the component through getters.
Please explain and show an example. There is no good example on the internet: all sources link to the official site, and the official site is a bit confusing.

As far as I know, for news pages and other large data sets, a lazy-load approach is recommended. That means you could fetch, for example, 50 objects from your API, and when the user scrolls and reaches the bottom, you load the next 50 news items.
Lazy loading is the practice of delaying the load or initialization of resources or objects until they are actually needed, to improve performance and save system resources.
Vuetify has the v-lazy component, and you could use it for your cards.
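A rough sketch of the idea (Vue 2 + Vuetify; the news getter and loadMoreNews action are assumed Vuex pieces, so adapt the names to your store):

<template>
  <div>
    <v-lazy v-for="item in news" :key="item.id">
      <v-card class="mb-4">
        <v-card-title>{{ item.title }}</v-card-title>
        <v-card-text>{{ item.body }}</v-card-text>
      </v-card>
    </v-lazy>
  </div>
</template>

<script>
import { mapGetters, mapActions } from 'vuex'

export default {
  // 'news' is an assumed getter exposing the items loaded so far
  computed: mapGetters(['news']),
  mounted() {
    window.addEventListener('scroll', this.onScroll)
  },
  beforeDestroy() {
    window.removeEventListener('scroll', this.onScroll)
  },
  methods: {
    // 'loadMoreNews' is an assumed action that fetches the next 50 items
    ...mapActions(['loadMoreNews']),
    onScroll() {
      const nearBottom =
        window.innerHeight + window.scrollY >= document.body.offsetHeight - 200
      if (nearBottom) this.loadMoreNews()
    },
  },
}
</script>

If you want classic paged navigation with the page number in the URL instead, the same slicing idea works with v-pagination bound to a computed property backed by this.$route.query.page.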

Search HTML Tables on Multiple Pages

Hello Stack Overflow Community!
I am making a directory of many thousands of custom mods for a game using HTML tables. When I started this project, I thought one HTML page would be slow but adequate for the ~4k files I was expecting. As I progressed, I realized there are tens of thousands of files I need to have in these tables, and I need to let the user search through them to find what they are missing to load up a new scenario. Each entry has about 20 text fields and a small image (~3KB). I only need to be able to search through one column.
I'm thinking of dividing the tables across several pages on my website to help loading speeds and improve overall organization. But then a user would have to navigate to each page, and perform a search there. This could take a while and be very cumbersome.
I'm not great at website programming. Can someone advise a way to allow the user to search through several web pages and tables from one location? Ideally this would jump to the location in the table on the new webpage, or maybe highlight the entry like the browser's search function does.
You can see my current setup here: https://www.loco-dat-directory.site/
Hopefully someone can point me in the right direction, as I'm quite confused now :-)
These would be my steps:
Copy all my info into an Excel spreadsheet, convert that to JSON, and load it as a JavaScript array (myarray). Then make an input field and, on each search, run a check like input == myarray[0].propertyName against each entry.
If you want something more than an exact match, you'd need https://lodash.com/ in your project.
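A rough sketch of that approach in plain JavaScript (the mods.json file name and the name/url fields are made up for illustration):

// Hypothetical data file exported from the spreadsheet:
// [{ "name": "...", "url": "..." }, ...]
fetch('mods.json')
  .then((res) => res.json())
  .then((mods) => {
    const input = document.querySelector('#search')
    const results = document.querySelector('#results')

    input.addEventListener('input', () => {
      const term = input.value.trim().toLowerCase()
      // substring match on the single searchable column
      const matches = term
        ? mods.filter((mod) => mod.name.toLowerCase().includes(term))
        : []
      results.innerHTML = matches
        .map((mod) => `<li><a href="${mod.url}">${mod.name}</a></li>`)
        .join('')
    })
  })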
Hacky Solution
There is a browser tool called TableCapture for capturing data from HTML tables and loading it into Excel/spreadsheets, where you are basically deferring to spreadsheet software to manage the searching.
You would have to see if:
This type of tool would solve your problem - maybe you can pull each HTML page's contents manually, then merge these pages into a document with multiple "sheets", and then let people download the "spreadsheet" from your website.
If you do not take on the labor above and just tell other people to do it, then you'd have to see if you can teach people how to perform the search and use this method on their own, e.g. "download this plugin, use it on these pages, search".
Why your question is difficult to answer
The reason why it will be hard for people to answer you on stackoverflow.com (which usually deals in code solutions) is that you need a more complicated solution (in my opinion) than hard-coded tables and HTML/CSS/JavaScript.
This type of situation is exactly why people use databases and APIs to accept requests ("term": "something") for information and deliver responses ( "results": [...] ).
Thank you everyone for your great advice. I wasn't aware most of these potential solutions existed, and it was good to see how other people were tackling problems of similar scope.
I've decided to go with DataTables for their built-in sorting and filtering: https://datatables.net/
I'm also going to use a javascript array with an input field on the main page to allow users to search for which pack their mod is in. This will lead them to separate pages on my site, each with a unique datatable for a mod pack. Separate pages will load up much quicker than one gigantic page trying to show everything.
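For anyone following the same path, the per-page DataTables setup is small (the table id here is illustrative):

// Turn one mod-pack table into a searchable, paginated DataTable
$(function () {
  $('#mod-table').DataTable({
    paging: true,    // built-in pagination keeps each page light
    searching: true, // built-in filtering replaces a hand-rolled search
  })
})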

Is there any better way to render fetched feed data on a webpage with infinite scrolling?

I am creating a webpage in ReactJS for a post feed (with texts, images, videos), just like Reddit, with infinite scrolling. I have created a single post component which is provided with the required data. I am fetching the posts from MySQL with axios. I have also implemented a redux store in my project.
I have added post voting as well. Currently, I am storing all the posts from the db in the redux store. If a user upvotes or downvotes, that change is made in the redux store as well as in the database, and the webpage re-renders the element with ease.
Is it feasible to use the redux store for this, given that the data will soon grow, maybe to millions of posts and more?
I previously used the useState hook to store all the data, but with that I had issues with dynamic re-rendering, as I had to set state every time a user voted.
If anyone has any efficient way, please help out.
It seems that this question goes far beyond just one topic. Let's break it down into the main pieces:
Client state. You say that you are currently using redux to store posts and update the number of upvotes as it changes. The thing is that this is not actually state in your case (or at least most of it is not). It is a common misconception to treat whatever data comes from an API as state. In most cases it is not state, it is a cache, and you need a tool that makes working with a cache easier. I would suggest trying something like react-query or swr. This way you will avoid a lot of boilerplate code and hand off server-data cache management to a library.
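For example, with react-query the fetch/cache/pagination dance collapses into one hook (a sketch only, in v5 object syntax; the /api/posts endpoint and the nextCursor response field are assumptions about your API, not something you described):

import { useInfiniteQuery } from '@tanstack/react-query'
import axios from 'axios'

function usePostsFeed() {
  return useInfiniteQuery({
    queryKey: ['posts'],
    // assumed endpoint; each response is expected to carry a nextCursor field
    queryFn: ({ pageParam }) =>
      axios
        .get('/api/posts', { params: { cursor: pageParam } })
        .then((res) => res.data),
    initialPageParam: 0,
    getNextPageParam: (lastPage) => lastPage.nextCursor,
  })
}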
Infinite scrolling. There are a few things to consider here. First, you need to figure out how you are going to detect when to preload more posts. You can do it by using an IntersectionObserver, or you can use some fancy library from NPM that does it for you. Second, if you aim for millions of records, you need to think about virtualization. In a nutshell, it removes elements that are outside of the viewport from the DOM so browsers don't eat up all the memory and die after some time of doomscrolling (that would be a nice feature, though). This article is a good starting point: https://levelup.gitconnected.com/how-to-render-your-lists-faster-with-react-virtualization-5e327588c910.
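A minimal IntersectionObserver sentinel in React could look like this (loadMore is a placeholder for whatever callback fetches the next page):

import { useEffect, useRef } from 'react'

function FeedSentinel({ loadMore }) {
  const sentinelRef = useRef(null)

  useEffect(() => {
    const observer = new IntersectionObserver(
      ([entry]) => {
        if (entry.isIntersecting) loadMore() // sentinel visible: preload more
      },
      { rootMargin: '400px' } // start loading a bit before the real bottom
    )
    observer.observe(sentinelRef.current)
    return () => observer.disconnect()
  }, [loadMore])

  // render this element after the last post in the feed
  return <div ref={sentinelRef} />
}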
Data source. You say that you are storing all posts in a database but don't mention any API layer. If you are shooting for millions of records and this is not just a project for practicing your skills, I would suggest putting an API between the client app and the database. Here are some good questions where you can find out why it is not the best idea to connect to the database directly from the client: one, two.

Updating the content of another webpage

I am a beginner web developer and working on a school project. My apologies for asking this basic question.
I am trying to create an online shopping store. What I am trying to do is this: when the user clicks on the checkout button, the shopping cart page gets updated with that item.
I don't know how to accomplish that. I would really appreciate it if someone could point me to a tutorial or provide some tips.
Your question is really abstract; however, you will have to find the right way to communicate with a database. It might be PHP/MySQL with Laravel or some other framework, or preferably C# ASP.NET with Entity Framework (that's what I would go with; it's not mandatory, some people like PHP). Then you need to learn some lambda or SQL, depending on which approach you take (PHP or C#).
You will probably need some kind of grid control for the cart. ASP.NET has a default one; personally I find it confusing and hard to work with (there are many controls that can be downloaded free of charge for non-commercial products; search Google). Then you would need to fill that grid with the list of objects that you get from your database. If you get that far alone, the next steps are easy: you will probably need some JavaScript for AJAX calls to a web service or to your backend so you can submit the orders.
As for the right tutorial, there is none, actually, since your question covers several topics: database modeling, data fetching, backend development, and frontend development. You can start by getting yourself some kind of server (look for MSSQL for ASP.NET and MySQL for PHP), then look for a database-modeling tutorial covering basic table relationships (1-1, 1-*, *-1). After that you would need to do your design (from the question I assume you already have one downloaded). Then you should google the right way of communicating with your database from the backend of your website (I advise Entity Framework; it's clean and easy). You can generally bind the list of objects directly to the grid that represents your cart (I say grid, but you can give it a data template so it looks like HTML, not to be confused with regular tables).
If you really get here, I see no reason to keep coding for a school project; in most places you can use all this for a bachelor-degree exam and pass with straight A's. But if you want more details, look into these videos; they explain quite well what you want to do.
https://www.youtube.com/watch?v=s91pPLx_T3Q
https://www.youtube.com/watch?v=yYr0seXj7qA
https://www.youtube.com/watch?v=lFkXk5gHjSs
I am not great at JavaScript but I think your page would need to update a cookie then read that value from said cookie. I hope that's a good place to start!
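A minimal sketch of that idea, using localStorage rather than a cookie (my substitution, since it is easier to store structured data there) and with made-up item fields:

// Product page: add the clicked item to the stored cart
function addToCart(item) {
  // item shape is illustrative, e.g. { id: 42, name: 'Blue T-shirt', price: 19.99 }
  const cart = JSON.parse(localStorage.getItem('cart') || '[]')
  cart.push(item)
  localStorage.setItem('cart', JSON.stringify(cart))
}

// Cart page: read the stored cart back and render it
function renderCart() {
  const cart = JSON.parse(localStorage.getItem('cart') || '[]')
  document.querySelector('#cart').innerHTML = cart
    .map((item) => `<li>${item.name}: ${item.price}</li>`)
    .join('')
}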
You might also find "Creating a Shopping Cart using only HTML/JavaScript" helpful.

Introducing service workers on a mobile site

I'm new to service workers. I want to integrate service workers into my site. My motive is to improve the performance of my website, not to make the website work offline. It's a real estate website.
So what I have done till now is create modular templates of my site and store them in the cache.
For example, template1:
<div>
<p>#data</p>
</div>
Whenever a fetch occurs on my page, I first call an API through AJAX to get the data, replace the #data variable in the cached response with the actual API response, and then return the new response to the browser.
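In code, the described approach is roughly this (the template path and API endpoint are placeholders, not the real ones):

// sw.js: fill a cached template with fresh API data before responding
self.addEventListener('fetch', (event) => {
  if (event.request.mode !== 'navigate') return
  event.respondWith(
    (async () => {
      const cache = await caches.open('templates')
      const templateResponse = await cache.match('/templates/template1.html')
      const template = await templateResponse.text()
      const data = await fetch('/api/page-data').then((res) => res.json())
      // swap the #data placeholder for the real content
      const html = template.replace('#data', data.text)
      return new Response(html, { headers: { 'Content-Type': 'text/html' } })
    })()
  )
})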
Question 1: I want to know, is that the right approach for HTML template caching?
With the above approach I'm running into challenges like loops and conditional statements in my HTML.
Question 2: Is there any way I can cache the templates with loops and change them at run time?
Question 3: Say I show the cached app shell to the user initially; is that going to affect my site's SEO ranking?
Question 4: I have to write new templates for the existing code, which means I have to maintain two codebases: one for service workers and another for normal browsers which don't support service workers. Any solution to this?
Regards
That's a lot of questions and I don't think service workers are necessarily your best bet.
Question 1: Personally I recommend using a framework such as KnockoutJS, Angular, Polymer, etc. for your HTML templates. These often have template caching built in.
Question 2: Instead of the approach you are using whereby you are replacing the variables before 'sending them to the browser' most frameworks would use some form of data bindings which would take care of iterations and conditions within the browser.
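For instance, with KnockoutJS the loop and the condition live in declarative markup and update automatically when the data changes (listings, sold, and price are made-up view-model fields):

<ul data-bind="foreach: listings">
  <li>
    <!-- ko if: sold --><span>Sold</span><!-- /ko -->
    <!-- ko ifnot: sold --><span data-bind="text: price"></span><!-- /ko -->
  </li>
</ul>
<script>
  // made-up view model for illustration
  ko.applyBindings({
    listings: [
      { sold: true, price: 0 },
      { sold: false, price: 250000 },
    ],
  })
</script>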
Question 3: Caching the app-shell would have no effect on SEO and Google has been parsing JavaScript for a while; however personally I would recommend that the website's content loads without JavaScript and then JavaScript is only used to enhance the experience. This would be the same whether using the app-shell model or not.
Question 4: I do not understand your current setup and this would not be an ordinary scenario so you might have something wrong.
Service workers and the Cache API are ordinarily used to cache your static assets, usually fonts, CSS, JavaScript, and HTML templates, and should result in improved performance as there are fewer HTTP requests; but there are other ways to improve performance that will address all of your questions without the use of service workers.
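For that static-asset case, the usual pattern is only a few lines (the asset list here is illustrative):

// sw.js: cache static assets at install time, then serve them cache-first
const CACHE = 'static-v1'
const ASSETS = ['/styles.css', '/app.js', '/logo.png']

self.addEventListener('install', (event) => {
  event.waitUntil(caches.open(CACHE).then((cache) => cache.addAll(ASSETS)))
})

self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then((cached) => cached || fetch(event.request))
  )
})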

Anyone ever tried to use Twitter to replace comments sections on web apps?

Here's the scenario I'm imagining.
Simple blog, users typically post comments in a comments form at the bottom of each blog article. Instead of that, using the Twitter API, pull tweets based on a hashtag. Base the hashtag on the article id (i.e. #site10201) where site is a prefix and the number is the article id.
Then provide a link to post a tweet using the hashtag, which would then get picked up by your Twitter API pull.
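Roughly, the pull on each article page would be something like this (the v1.1 search endpoint; articleId, BEARER_TOKEN, and renderComments are placeholders, and doing this straight from the browser exposes the token):

// Pull tweets for this article's hashtag and hand them to the renderer
const hashtag = '#site' + articleId // e.g. #site10201
fetch(
  'https://api.twitter.com/1.1/search/tweets.json?q=' +
    encodeURIComponent(hashtag),
  { headers: { Authorization: 'Bearer ' + BEARER_TOKEN } }
)
  .then((res) => res.json())
  .then((data) => renderComments(data.statuses))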
I'm imagining horrible spam issues, but other than that, bad idea?
This has some drawbacks compared to more run-of-the-mill database systems:
Additional network overhead. Most self-hosted blogs typically rely on the database and blog being on the same server (physical or virtual), so a db lookup is fast (and reliable) compared to Twitter API requests.
Caching issues. One host is only allowed X requests to Twitter at a time (the next request is going to end up a 404), and how are you going to manage that from your website for a scenario which becomes steadily more complex as articles are added? Presumably you need to authenticate, so the easy way out is a security liability. (The easy way out being to use JavaScript in the browser to perform the actual request, which neatly circumvents the problem in 20/80 fashion.) Granted, most blogs don't get that kind of traffic. ;)
You tie your precious or not-so-precious comments to the mercy of the fail whale. Which is kind of odd, considering a self-hosted blog basically means you want that kind of control in the first place by not using a service like Blogger.
Is it possible to ensure the uniqueness of hashtags, in the general case? What are you going to do if someone else had the same bright idea and took the tag name 5 ms before you did? Would you end up pulling in the drivel of someone else's blog comments rather than the brilliance you have come to expect from yours? ;)
Lesser point: you rely on others having a Twitter account. Anonymous replies are off the table.
TOS and other considerations that may be imposed on you by Twitter, either now or in the future. Point (2) above is actually a major item of Twitter's TOS.