In my React application I'm using an HTML select (via react-select2-wrapper) whose dropdown includes a search bar, so the user can first type and then pick from the options displayed. The problem is that when I select an option for the first time, move on to other options, and then try to select that first option again, I can see it in the dropdown but I'm not able to select it; it doesn't get selected. This only happens for the option that was selected first, and it has occurred on another page of my application as well. Are there a lot of issues with the select tag? Is there a correct version I need to use, or is the stylesheet I'm importing, 'react-select2-wrapper/css/select2.css', the correct one?
I'm new to Stack Overflow (Hello World!). I have some basic understanding of JS, C++, HTML, and CSS, and I have been looking in this and other forums, but I am having trouble figuring this one out, mostly because I don't know what it would be called (TL;DR at the bottom):
Essentially, I would like to build a Chrome extension that extracts data from a website (in this case, Copart, a website where people sell cars) and creates a link from it that opens another window to one of three car evaluators (Edmunds, KBB, NADA). I fix cars as a hobby, but it's a pain to have to input vehicle info over and over, so I wanted to automate the process as much as possible. Hopefully this will help others as well.
E.g. a generic link to Edmunds is: https://www.edmunds.com/ford/escape/2018/appraisal-value/?vin=XXXXXXXXXXXXXX. I would like to know how to extract the make, model, year, and VIN, in this case from Copart (Example copart page). On KBB, e.g., all I can see that can be automated is inputting the VIN into the window and clicking "Go". Is there a way to have the plugin automatically select "VIN" and copy the VIN into the field while clicking the "Go" button?
[Kbb screenshot]
I know, a lot of questions. I'm also not quite sure what this would be called? A crawler? A scraper? A craper? :)
Either way, here is the basic (TL;DR) question:
How do I create a Chrome extension that extracts data from one website, opens a URL built from that data, and then performs actions on that page, such as switching a label, populating a textbox, and clicking a button?
I have only posted this question here, so if there's a better place to put it, please let me know.
Mark
Extracting data from one website and searching for the scraped data on another website
1. For this project you can use a combination of Selenium and Scrapy.
2. Since both sites are dynamic pages powered by JavaScript, you do need to check their security constraints.
3. You can make use of a spider under Scrapy, with each spider backed by Selenium.
4. Where pressing the "Go" button is needed, that can be achieved using Selenium (see the sketch after this list).
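The answer presumably has the Python Selenium/Scrapy stack in mind. Purely as a rough illustration of the Selenium part (typing a VIN and pressing "Go"), here is a sketch using Selenium's Go bindings (github.com/tebeka/selenium); the KBB URL and the CSS selectors are placeholders I made up, not the site's real markup:

```go
package main

import (
	"fmt"
	"log"

	"github.com/tebeka/selenium"
)

func main() {
	// Assumes a Selenium server (or chromedriver behind one) is already
	// listening on localhost:4444; starting it is not shown here.
	caps := selenium.Capabilities{"browserName": "chrome"}
	wd, err := selenium.NewRemote(caps, "http://localhost:4444/wd/hub")
	if err != nil {
		log.Fatal(err)
	}
	defer wd.Quit()

	// Hypothetical VIN, scraped earlier from the source site.
	vin := "1FMCU0GD5JUA00000"

	if err := wd.Get("https://www.kbb.com/whats-my-car-worth/"); err != nil {
		log.Fatal(err)
	}

	// Placeholder selectors; inspect the real page to find the actual ones.
	vinInput, err := wd.FindElement(selenium.ByCSSSelector, "input#vin")
	if err != nil {
		log.Fatal(err)
	}
	vinInput.SendKeys(vin)

	goBtn, err := wd.FindElement(selenium.ByCSSSelector, "button[type=submit]")
	if err != nil {
		log.Fatal(err)
	}
	goBtn.Click()

	title, _ := wd.Title()
	fmt.Println("landed on:", title)
}
```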
I'm creating a VCL application with Delphi 10.3 and want to support some web functionality by having the user enter the ISBN of a book into a TEdit component, then passing/sending this value to the search field on this website: https://isbnsearch.org, after which the website looks up the ISBN and displays the author of the book. I want to somehow access the information (i.e. the author) presented in the search result and use it in my application.
This is my GUI, for a better idea of what I want to accomplish:
What code can I use for this? Any other feasible suggestions or approaches are acceptable.
When performing a search on that website, it simply loads a page with a specific URL query string...
https://isbnsearch.org/search?s=suess
The above example is when I search for "suess", so you can easily concatenate a search URL.
You can use any HTTP component, such as TIdHTTP, to load this search page, then use an HTML parser to scrape the page and read what you need. Much, much easier than trying to read through the TWebBrowser.
In the end, you won't actually display the HTML (I mean you can if you want to), but the idea is to read the data and display it in your own format.
On that specific page, start by locating the ul element with id searchresults. Each li element within it contains an individual result. Unfortunately, this website uses pagination and only shows 10 results per page. To get the further pages, call the same page again with an extra parameter: &p=2 for the 2nd page, &p=3 for the 3rd page, and so on.
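The answer is Delphi-oriented (TIdHTTP plus an HTML parser). Just to illustrate the same fetch-and-parse approach in code, here is a rough sketch in Go using net/http and golang.org/x/net/html; the searchresults/li structure follows the description above, but treat those details as assumptions about the page:

```go
package main

import (
	"fmt"
	"log"
	"net/http"

	"golang.org/x/net/html"
)

// findSearchResults walks the parsed HTML and reports each <li> found
// inside the <ul id="searchresults"> element described above.
func findSearchResults(n *html.Node, inResults bool) {
	if n.Type == html.ElementNode && n.Data == "ul" {
		for _, a := range n.Attr {
			if a.Key == "id" && a.Val == "searchresults" {
				inResults = true
			}
		}
	}
	if inResults && n.Type == html.ElementNode && n.Data == "li" {
		fmt.Println("result item found") // pull title/author child nodes here
	}
	for c := n.FirstChild; c != nil; c = c.NextSibling {
		findSearchResults(c, inResults)
	}
}

func main() {
	// &p=2, &p=3, ... selects further pages, as noted above.
	resp, err := http.Get("https://isbnsearch.org/search?s=suess&p=1")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	doc, err := html.Parse(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	findSearchResults(doc, false)
}
```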
On the other hand, that is the worst way to acquire such information. What you should be doing is using a proper API which gives you machine-friendly data. The service you are referencing doesn't appear to offer one, but here's an example of one which does:
https://openlibrary.org/dev/docs/api/books - this also appears to provide you MUCH more information than the one you're using.
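As an illustration, here is a minimal sketch of calling that Books API from Go and pulling out the author names, assuming the bibkeys/jscmd=data query form described in those docs; the ISBN below is just a placeholder:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	isbn := "9780140328721" // placeholder ISBN

	url := "https://openlibrary.org/api/books?bibkeys=ISBN:" + isbn +
		"&format=json&jscmd=data"
	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// The response is keyed by "ISBN:<isbn>"; decode only what we need.
	var result map[string]struct {
		Title   string `json:"title"`
		Authors []struct {
			Name string `json:"name"`
		} `json:"authors"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
		log.Fatal(err)
	}

	book := result["ISBN:"+isbn]
	fmt.Println("Title:", book.Title)
	for _, a := range book.Authors {
		fmt.Println("Author:", a.Name)
	}
}
```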
I am writing a program for managing an inventory. It serves up HTML based on records from a PostgreSQL database, or writes to the database using HTML forms.
Different functions (adding records, searching, etc.) are accessible via <a></a> tags or form submits, which in turn call functions registered with http.HandleFunc(); those functions then generate queries, parse the results, and render them to HTML templates.
The search function renders query results to an HTML table. To keep the search results page usable and uncluttered, I intend to provide only the most relevant information there. However, since there are many more details stored in the database, I need a way to access that information too. In order to do that, I wanted to have each table row clickable, displaying the details of the selected record in a status area at the bottom or side of the page, for instance.
I could try to follow the pattern that works for the other functions, that is, use <a></a> tags and http.HandleFunc() to render new content, but this isn't exactly what I want, for a couple of reasons.
First: there should be no need to navigate away from the search results page to view the additional details; a single record's full data is small enough to be rendered on the same page as the search results.
Second: I want the whole row clickable, not merely the text within a table cell, which is what the <a></a> tags get me.
Using the id returned from the database in an attribute, as in <div id="search-result-row-id-{{.ID}}"></div>, I am able to work with individual records, but I have yet to find a way to capture that click in Go.
Before I run off and write this in JavaScript: does anyone know of a way to do this strictly in Go? I am not particularly averse to using the tried-and-true JS methods, but I am curious to see if it could be done without them.
does anyone know of a way to do this strictly in Go?
As others have indicated in the comments, no, Go cannot capture the event in the browser.
For that you will need to use some JavaScript to send the request for more information to the server (where Go runs).
You could also push all the required information to the browser when you first serve the page and hide/show it based on CSS/JavaScript events, but again, that's just regular web development and has nothing to do with Go.
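As a rough sketch of that first option, here is a hypothetical Go handler that returns an HTML fragment for one record; the client-side JavaScript (not shown) would attach a click handler to each row, call fetch("/record/details?id=..."), and drop the response into the status area. The route, table, and column names are all made up for illustration:

```go
package main

import (
	"database/sql"
	"html/template"
	"log"
	"net/http"

	_ "github.com/lib/pq" // PostgreSQL driver, registered as "postgres"
)

// Fragment rendered into the status area on the client.
var detailTmpl = template.Must(template.New("detail").Parse(
	`<div class="detail"><h3>{{.Name}}</h3><p>{{.Notes}}</p></div>`))

type record struct {
	Name  string
	Notes string
}

// detailsHandler looks up one record by the id passed in the query string
// and writes just the HTML fragment, not a whole page.
func detailsHandler(db *sql.DB) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		id := r.URL.Query().Get("id")
		var rec record
		err := db.QueryRow(
			`SELECT name, notes FROM inventory WHERE id = $1`, id).
			Scan(&rec.Name, &rec.Notes)
		if err != nil {
			http.Error(w, "not found", http.StatusNotFound)
			return
		}
		detailTmpl.Execute(w, rec)
	}
}

func main() {
	db, err := sql.Open("postgres", "dbname=inventory sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	http.HandleFunc("/record/details", detailsHandler(db))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```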
I have an admin panel which uses the admin.php?page=X system to condense all the admin features into one easy-to-use page. However, I also have a pagination system on the admin's 3rd page which also uses a '?' in the link, and this doesn't seem to work. The link, for instance, is:
admin.php?page=3?pn=2
But this doesn't show pagination page '2'; it just shows the original page 3 of the admin panel.
Is this because you cannot have more than one '?' in a link, or is there a way to change this?
Multiple querystring parameters are separated by an ampersand &:
admin.php?page=3&pn=2
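To illustrate how such a query string decodes into its two separate parameters (sketched here with Go's net/url purely as an illustration; in PHP the same values simply arrive as $_GET['page'] and $_GET['pn']):

```go
package main

import (
	"fmt"
	"log"
	"net/url"
)

func main() {
	// Each key=value pair after the single '?' is separated by '&'.
	q, err := url.ParseQuery("page=3&pn=2")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(q.Get("page")) // "3"
	fmt.Println(q.Get("pn"))   // "2"
}
```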
I am trying to simulate the functionality of a form on this website, but I don't know exactly what the POST URL looks like.
The link to the website is here:
selfservice.mypurdue.purdue.edu/prod/bwckschd.p_disp_dyn_sched >> then click on Spring 2013. The request I am trying to replicate is the one that happens when the user clicks Course Search and selects CS from the subject list.
You can look at the HTML to see the values they use in their POST request. How do I see what the values look like once the button is clicked? I am trying to replicate this and set the variables to the same values. What I need is a URL with all of the variables set to their respective values; I understand this can be done with a GET request. Can someone tell me how to extract this URL so I can proceed?
I edited the page using the Chrome inspector and changed the form's method to GET - this is the URL that was displayed.
https://selfservice.mypurdue.purdue.edu/prod/bwckschd.p_get_crse_unsec?term_in=201320&sel_subj=dummy&sel_day=dummy&sel_schd=dummy&sel_insm=dummy&sel_camp=dummy&sel_levl=dummy&sel_sess=dummy&sel_instr=dummy&sel_ptrm=dummy&sel_attr=dummy&sel_subj=CS&sel_crse=&sel_title=&sel_schd=%25&sel_from_cred=&sel_to_cred=&sel_camp=%25&sel_ptrm=%25&sel_instr=%25&sel_sess=%25&sel_attr=%25&begin_hh=0&begin_mi=0&begin_ap=a&end_hh=0&end_mi=0&end_ap=a
However, this URL doesn't resolve, as the script is obviously expecting POST data.
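So the request has to be sent as a POST with those same fields in the body. A minimal sketch of replaying it with Go's net/http, with the field list trimmed to a few of the parameters from the URL above (note the duplicated keys such as sel_subj, which need Add rather than Set so both values are kept):

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"net/url"
)

func main() {
	// Same endpoint as the GET URL above, but the fields go in the body.
	endpoint := "https://selfservice.mypurdue.purdue.edu/prod/bwckschd.p_get_crse_unsec"

	form := url.Values{}
	form.Add("term_in", "201320")
	// Several keys appear twice (a "dummy" value plus the real one),
	// so use Add rather than Set to keep both.
	form.Add("sel_subj", "dummy")
	form.Add("sel_subj", "CS")
	form.Add("sel_schd", "dummy")
	form.Add("sel_schd", "%")
	// ...remaining sel_* / begin_* / end_* fields from the URL above...

	resp, err := http.PostForm(endpoint, form)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp.Status, len(body), "bytes")
}
```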