It seems I'm the only one in the Googlesphere willing to write this program (in Python) that handles data from the OPAP betting site, and I'm missing the data (odds, game times, etc.). The site provides no API, but I was told that monitoring the calls to the site's JSON endpoint would get the job done. The closest I found is http://praktoreio.pamestoixima.gr/web/services/rs/iFlexBetting/retail/games/15104/0.json?shortTourn=true&startDrawNumber=686&endDrawNumber=686&sportId=s-441&marketIds=0&marketIds=0A&marketIds=1&marketIds=69&marketIds=68&marketIds=20&marketIds=21&marketIds=8&locale=en&brandId=defaultBrand&channelId=0 Still zero results; I'd appreciate the right one.
I am creating a webpage in ReactJS for a post feed (with text, images, videos), just like Reddit, with infinite scrolling. I have created a single post component which is given the required data. I am fetching the posts from MySQL with axios, and I have implemented a redux store in my project.
I have also added post voting. Currently, I am storing all the posts from the db in the redux store. If a user upvotes or downvotes, that change is made in the redux store as well as in the database, and the page re-renders the element with ease.
Is it feasible to use the redux store for this, given that the data will grow soon, maybe to millions of posts and more?
I previously used the useState hook to store all the data, but with that I had issues with dynamic re-rendering, as I had to set state every time a user voted.
If anyone knows a more efficient way, please help out.
Seems that this question goes far beyond just one topic. Let's break it down into its main pieces:
Client state. You say that you are currently using redux to store posts and update the number of upvotes as it changes. The thing is that this data is not actually state in your case (or at least most of it isn't). It's a common misconception to treat whatever data comes from an API as state. In most cases it's not state, it's a cache, and you need a tool that makes working with a cache easier. I would suggest trying something like react-query or swr. This way you will avoid a lot of boilerplate code and hand server-data cache management off to a library.
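A minimal sketch with react-query (v3 API; the /api/posts endpoint and the Post/Spinner components are assumptions): the library caches the server response, dedupes requests, and re-renders consumers when the cache changes, so no hand-rolled redux plumbing is needed.

import axios from 'axios';
import { useQuery } from 'react-query';

const fetchPosts = () => axios.get('/api/posts').then((res) => res.data);

function Feed() {
  // useQuery caches the result under the 'posts' key and tracks loading state
  const { data: posts, isLoading } = useQuery('posts', fetchPosts);
  if (isLoading) return <Spinner />;
  return posts.map((post) => <Post key={post.id} {...post} />);
}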
Infinite scrolling. There are a few things to consider here. First, you need to figure out how you are going to detect when to preload more posts. You can do it by using an IntersectionObserver, or you can use some fancy library from NPM that does it for you. Second, if you aim for millions of records, you need to think about virtualization. In a nutshell, it removes elements that are outside of the viewport from the DOM, so the browser doesn't eat up all the memory and die after some time of doomscrolling (that would be a nice feature, though). This article is a good starting point: https://levelup.gitconnected.com/how-to-render-your-lists-faster-with-react-virtualization-5e327588c910.
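Here is a bare-bones sketch of the detection part, assuming a sentinel element at the bottom of the feed and a loadMorePosts() function of your own:

// fire loadMorePosts() whenever the sentinel scrolls into the viewport
const sentinel = document.querySelector('#feed-sentinel');
const observer = new IntersectionObserver((entries) => {
  if (entries[0].isIntersecting) loadMorePosts();
});
observer.observe(sentinel);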
Data source. You say that you are storing all posts in a database but don't mention any API layer. If you are shooting for millions of records and this is not just a project for practicing your skills, I would suggest having an API between the client app and the database. Here are some good questions where you can find out why connecting to the database directly from the client is not the best idea: one, two.
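For illustration only, a hypothetical thin API layer in Express (route, table, and connection details are all made up), so the browser never talks to MySQL directly:

const express = require('express');
const mysql = require('mysql2/promise');

const app = express();
const pool = mysql.createPool({ host: 'localhost', user: 'app', database: 'feed' });

// paginated feed endpoint: the client asks for a page, never for the db
app.get('/api/posts', async (req, res) => {
  const page = Number(req.query.page) || 0;
  const [rows] = await pool.query(
    'SELECT * FROM posts ORDER BY created_at DESC LIMIT 20 OFFSET ?',
    [page * 20]
  );
  res.json(rows);
});

app.listen(3000);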
I tagged this question as "conceptual" because I'm not sure if creating a Chrome extension is the best way. In my opinion it's better to ask before spending a few hours writing something, only to find out that some part is too difficult or impossible.
The problem
I used to analyze my finances using CSV files downloaded from my bank account. But, as sometimes happens, my bank launched a new GUI and the CSV feature disappeared. They don't know when they will bring it back, or whether they will at all. So I have to grab the transactions some other way and put them into a CSV file.
Concept of solution
I think scraping the page with the transactions is not a good way, because the whole thing looks like it was generated with totally random CSS classes and ids. I noticed that the list of transactions is sent in JSON format in an AJAX response. I analyzed that JSON, and every field I'm interested in has a name, so accessing the data is quite easy. The only problem I see is that the first JSON load contains only the first 10 transactions. To load more, I have to scroll down; then another AJAX request goes to the same URI, and the next JSON payload comes in the response. So, if I want to get the transactions for a whole month, I have to scroll down a few times, and my tool should catch the first response as well as the following ones.
I don't have experience with Chrome extensions, but they claim that if you know web technologies like JS, CSS, and HTML, it shouldn't be difficult to write a simple extension. If I can get this JSON from the AJAX response into my extension, then generating the CSV file shouldn't be a problem. Roughly, I imagine something like the sketch below.
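(Just a sketch of the idea, not working code: a script injected into the page wraps XMLHttpRequest so responses can be forwarded to the extension; the URI filter is made up.)

const origOpen = XMLHttpRequest.prototype.open;
XMLHttpRequest.prototype.open = function (...args) {
  this.addEventListener('load', function () {
    if (this.responseURL.includes('/transactions')) { // made-up URI filter
      // hand the raw JSON over to the extension's content script
      window.postMessage({ type: 'BANK_TX_JSON', payload: this.responseText }, '*');
    }
  });
  return origOpen.apply(this, args);
};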
The question
The main question is whether my concept is realizable: is there easy access to data loaded in an AJAX response? If you see a better solution, I'm open to suggestions.
You can create the CSV file from the JSON in the DOM (no need to call any API).
Please refer to this demo link:
jsfiddle.net/hybrid13i/JXrwM/
If you want to implement this functionality in a Chrome extension, you need to add the downloads permission in manifest.json: https://developer.chrome.com/extensions/downloads
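A hedged sketch of both steps, assuming the transactions array has already been captured from the AJAX JSON (the toCsv helper and the filename are made up):

// flatten an array of transaction objects into CSV text
function toCsv(rows) {
  const header = Object.keys(rows[0]).join(',');
  const body = rows.map((r) =>
    Object.values(r).map((v) => `"${String(v).replace(/"/g, '""')}"`).join(',')
  );
  return [header, ...body].join('\n');
}

const csv = toCsv(transactions); // `transactions` = the parsed AJAX JSON
const url = URL.createObjectURL(new Blob([csv], { type: 'text/csv' }));
chrome.downloads.download({ url, filename: 'transactions.csv' }); // needs "downloads" permission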
I've been trying to resolve this problem for a while now; I even gave rewriting the entire program a shot, but without success. The application runs on VueJS 2.3.3 and is supposed to run in Chromium on a Raspberry Pi (irrelevant information, for now).
We're working with several components that are included in a single file; later on, this file is compiled using either gulp or npm run dev. When the VueJS instance initializes, a request is sent using Vue Resource's $http option. This receives a JSON response of around 30 MB, which is saved in the data array, as seen here:
this.$http.get('<url>' + this.token)
    .then((response) => {
        // the ~30 MB payload is kept on the component from here on
        this.properties = response.properties;
    });
This data is used later on for further actions. Another thing worth mentioning is that the data is refreshed every once in a while, which is where I think the problem occurs: if I don't refresh the data every 5 minutes (it can be longer too; it really depends on how I'm testing), the program runs fine. It's just that I want to refresh the data every once in a while to retrieve new information. I'm setting the timeout as follows:
this.dataTimeout = setTimeout(this.refreshData, 300000);
Each (so-called) property has at least 6 base64 images saved in its JSON, which are later presented to the user. Besides that, there is a name, an address, and some other tiny bits of data. It doesn't sound all that wrong, but I get the feeling that each response makes memory usage grow so much that even a desktop has trouble running it.
Every 10 seconds a new property is presented on the user's screen, including the images, street, location, etc. I'm not sure if there is a memory leak in my code or if I'm forgetting something. A few questions pop up in my head:
Do I need to reset the response I'm getting from the server back to null or undefined?
Could there be a leak in one of the plugins I'm using (just VueResource)?
For how long can a VueJS instance stay alive? Is there a recommended time after which to reload the entire program?
What things should I take into consideration in order to achieve this at all?
I've taken out the data renewal and put a demo project online, which can be seen right here. The main problem I'm having is that the browser just runs out of memory and shows the amazing "Aw, Snap!" page from Chrome. I tried taking snapshots of the memory usage, but everything seems fine there; it just explodes randomly after a while.
Well, I don't know what your app really does, but are your 30 MB of data really useful / essential? In JSON, moreover?
Maybe you don't need all this data, and you could adapt the payload to your needs. For example, keep the textual data in your JSON and retrieve your Base64 images another way, as in the sketch below.
I don't understand why you store images in memory. Images are only useful for display purposes, in my opinion.
So I think 30 MB is really huge. But maybe I'm wrong?
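For example, a hypothetical variant where the API returns image URLs instead of inlined base64 (field names are assumptions), so the payload holds only text and the browser handles image loading and caching itself:

this.$http.get('<url>' + this.token).then((response) => {
  // e.g. each property: { name, address, imageUrls: ['https://cdn.example.com/p1.jpg', ...] }
  this.properties = response.properties;
});
// in the template: <img v-for="src in property.imageUrls" :src="src">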
By the way, I've tested with Firefox Nightly: no problem there, it doesn't seem to crash. Maybe I just never hit the refresh call?
I am using this MySQL query for measuring view counts:
UPDATE content SET views=views+1 WHERE id='$id'
For example, if I want to check how many times a single page has been viewed, I just put it at the top of the page code. Unfortunately, I always get a count about 5-10x bigger than the results in Google Analytics.
If I am correct, one refresh should increase the value in my database by +1. Don't "views" in Google Analytics work the same way?
If, e.g., Google Analytics reports that a single page has been viewed 100 times and my database says it was, e.g., 450 times, how could such a simple query generate an additional 350 views? And I don't mean visits or unique visits, just regular views.
Is it possible that Google Analytics interprets such data in a slightly different way and my database result is correct?
There are quite a few reasons why this could be occurring. The most usual culprit is bots and spiders. As soon as you use a third-party API like Google Analytics, or Facebook's API, you'll get their bots making hits to your page.
You need to examine each request in more detail. The user agent is a good place to start, although I do recommend researching this area further - discriminating between human and bot traffic is quite a deep subject.
In Google Analytics, the data is provided by the user's browser. For example:
a user views a page on your domain; his browser is then in charge of communicating the pageview to Google. If something fails along the way, the data will not be included in the reports.
Your SQL system, on the other hand, is log-based analytics: the data is collected by your own server, which reduces data-collection failures.
Seen this way, it means Google Analytics can miss data because of slow connections and users that don't execute JavaScript (ad blockers or bots), or because the HTML page is not printed properly.***
Now, 5x more is a huge discrepancy; in my experience it should be around 8-25% (tested at the transaction level; for pageviews it can be more).
What I recommend is:
Save the device, the browser information, the IP, and any other metadata that can be useful, and don't forget the timestamp. That way you can isolate the problem; maybe it's robots or users with ad blockers, or, in the worst case, your tracking code is not properly implemented (located in the footer, for example). A rough sketch of such logging follows.
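A sketch only, assuming a Node/Express backend for illustration (the same idea works in PHP; `db` is a made-up MySQL handle):

// log every hit with enough metadata to separate humans from bots later
app.use((req, res, next) => {
  db.query(
    'INSERT INTO page_views (path, ip, user_agent, viewed_at) VALUES (?, ?, ?, NOW())',
    [req.path, req.ip, req.headers['user-agent']]
  );
  next();
});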
*** I added this because I once had a huge discrepancy, but it was a server error: the HTML code was not printed properly, showing the user an empty HTTP response. MySQL was not fast enough to save the information and serve the HTML. I noticed it when a stress test (via Screaming Frog) showed a lot of 500 errors. (A WordPress blog with no cache.)
I'd like to start developing a "simple" game with HTML5 and I'm quite confused by the many resources I found online. I have a solid background in development, but in completely different environments (ironically, I started programming because I wanted to become a game developer, and it's the only thing I've never done in 13 years...).
The confusion derives from the fact that, although I know JavaScript very well and I have some knowledge of HTML5, I can't figure out how to mix what I know with all this new stuff. For example, here's what I was thinking of:
The game would be an implementation of chess. I have a simple "ready-made" AI algorithm that I can reuse for single player; the purpose here is to learn HTML5 game development, so this part is not very important at the moment.
I'd like to build a website around the game. For this I'd use a "regular" CMS, as I know many of them already and it would be faster to set up.
Then I'd have the game itself, which, in its "offline" version, has nothing to do with the website, since, as far as I understand, it would live on a page by itself. This is the first question: how do I make the game aware of the user's session? The login would be handled by the CMS (it should be much easier this way, as user management is already implemented).
As a further step, I'd like to move the AI to the server. This is the second question: how do I make the game send the player's actions to the server, and how do I get the answer back?
Later on, I'd like to bring a PvP element to the game, i.e. one-against-one multiplayer (like good old chess). This is the third question: how do I send information from one client to another and keep the conversation going? For this, people have recommended I have a look at Node.js, but it's one more element that I can't figure out how to "glue" to the rest.
Here's an example of a single action in a PvP session, which already gives me a headache: Player 1 sends his move to the server (how does the game talk to Node.js?). I'd need to identify the game ID (where and how should I store it?), and make sure the player hasn't manually modified it, so it won't interfere with someone else's game (how?).
I'm aware that the whole thing, as I wrote it, is very messy, but that's precisely how I feel at the moment. I can't figure out where to start, so any suggestion is extremely welcome.
Too many things and probably in the wrong order.
A lot of the issues don't seem to me to be particularly related to HTML5 in the first instance.
Start with the obvious thing: you want a single page (basically a JavaScript application) that plays chess, so build that. If you can't build that, then the rest is substantially irrelevant; if you can build it (and I don't doubt that you can), then the rest is about building on that capability.
So we get to your first question. At the point at which you load the page you will have the session; it's a web page like any other web page, so that's how you get the session. If you're offline, then you've persisted it from when you were online by whatever means, presumably local storage.
You want to move the AI to the server? OK, so make sure that the front-end user interaction talks to an "interface" that records the player's moves and retrieves the AI's moves. Given this separation, you can replace the AI on the client with an AJAX call (although I'd expect the X to be JSON!) to the server with the same parameters.
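A sketch of that interface from the browser side (the /api/move endpoint, the response shape, and applyMove are all made up; any AJAX helper works the same way as fetch does here):

fetch('/api/move', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ gameId: gameId, move: 'e2e4' }),
})
  .then((res) => res.json())
  .then((data) => applyMove(data.aiMove)); // applyMove = your existing board-update code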
It gets better: if you want to do player-versus-player, you're just talking about routing through the server from one user/player to another user/player. The front-end code doesn't have to change, only what the server does at the far end of the AJAX call.
But for all this, take a step back and solve the problems one at a time. If you do that, you should arrive where you want to go without driving yourself nuts worrying about a bucket full of problems that seem scary but that you can probably solve with ease, one at a time. I'd start by getting your game to run, all on its own, in the browser.
About question one: you could give the user a signed cookie, e.g. a cookie that contains his user id together with the SHA-2 hash of his user id plus a long, secret salt (e.g. 32 bytes).
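A minimal Node sketch of the idea (an HMAC is the usual way to implement "hash of user id + secret"; the secret value here is a placeholder):

const crypto = require('crypto');
const SECRET = 'replace-with-32-random-bytes'; // stays server-side only

function signedCookieValue(userId) {
  // tampering with the id invalidates the signature
  const sig = crypto.createHmac('sha256', SECRET).update(String(userId)).digest('hex');
  return userId + '.' + sig; // verify by recomputing the HMAC and comparing
}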
About question two: For exchanging stuff and calling remote functions, I'd use the RPC library dnode.
About question three: Use the same thing for calling methods between clients.
Client code (just an example):
DNode.connect(function (remote) {
    // methods defined on `this` become callable by the server
    this.newPeer = function (peer) {
        // `peer` is a remote reference to the other player,
        // handed over by the server when a match is made
        peer.sendChatMessage("Hello!");
    };
});
You don't have to use game IDs if you use dnode: just hand the browser functions that are bound to the game. If you need IDs for some reason, use a UUID module to create long, random ones; they're unguessable.