I'm trying to scrape the price of the first ticket on the page using this XPath:
'.//*[@class="price"]/text()'
This works in the developer console, but not when I run it in the Scrapy shell using response.xpath. I have also tried the following in the shell:
'.//*[@class="initial"]/div[@class="price"]/text()'
and
'//*[@id="tVB901769989"]/div[1]/div[4]' (although I don't think the id attribute can be used in the shell like this).
Is there something wrong with the XPaths I've used, or is there something different about the way the page works? Any help would be appreciated. Thanks!
This happens because you are looking at different requests: the page you see doesn't have the information you need inside that file; it loads it dynamically, in this case from: www.vividseats.com/javascript/tickets.shtml?productionId=1771684
There you can check the prices in JSON format. I think this is one item:
{
"s":"Section 111",
"r":"8-22",
"q":"4",
"p":"692.00",
"i":"VB782041491",
"d":"111",
"n":"Zone Seating. The seller is committing to procure these tickets for you upon receipt of your order. After you place your order and your order is confirmed, we guarantee that your tickets will be within the listed zone
or section listed or one comparable and that you will receive these tickets in time for the event or
your money back. Orders exceeding four tickets may be split up into different rows within the requested
zone or section.",
"f":"0",
"l":"Section 111",
"g":"0",
"e":"0",
"h":"07/21/15",
"t":"0",
"v":"",
"c":"84352",
"z":"1",
"rhdn":"0",
"ind":"0",
"sd":"0"
}
where p contains the price.
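Since the data is already JSON, you can skip the HTML entirely and request that URL from a spider. A minimal sketch, assuming the items sit under a top-level "tickets" key (that key name is a guess; inspect the real response first):
import json
import scrapy

class TicketSpider(scrapy.Spider):
    name = "tickets"
    # productionId copied from the example URL above; change it per event
    start_urls = ["http://www.vividseats.com/javascript/tickets.shtml?productionId=1771684"]

    def parse(self, response):
        data = json.loads(response.text)
        # "tickets" is an assumed key; adjust after inspecting the payload
        for ticket in data.get("tickets", []):
            yield {"section": ticket.get("s"), "price": ticket.get("p")}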
I have three separate SPSS files with information about roughly 7500 hemicolectomy patients. One file contains the information about the hemicolectomies, the second one about other surgeries the patients have had during their lifetime, and the last one contains information about their sick leaves during their lifetime.
I have merged the files (idnumber is the common variable) into a single SPSS document, but I ran into a problem with filtering out the surgeries and sick leaves that have nothing to do with the hemicolectomy. I'm quite new to SPSS, so the simplest way I could think of doing this is by somehow copying the hemicolectomy info to every case and then just using the date/time calculator to choose which sick leaves and surgeries to discard. Switching to wide format is impractical due to the large number of unrelated surgeries and sick leaves: I'd have thousands of variables.
So basically I'd like to do the following:
IF idnumber = idnumber THEN variable1=variable1 AND variable2=variable2 etc
How would I go about doing this?
All help will be appreciated!
The IF command can only be used with one transformation at a time:
IF [condition] [transformation].
Assuming both of your files are sorted by idnumber:
UPDATE FILE=[master_file_reference]
/FILE=[secondary_file_reference]
/BY idnumber.
EXECUTE.
The file references can be made either by dataset name or by full path.
More on the UPDATE command:
https://www.ibm.com/support/knowledgecenter/en/SSLVMB_24.0.0/spss/base/syn_update_examples.html
I can't comment yet, so I'm sorry if I misunderstand the problem. I would've asked for clarification in the comments to the question... here goes:
So you have three sources of data: dates (?) of hemicolectomies, one per case; dates (?) of other surgeries, several per case; and sick leaves, even more per case. Is that right?
I'd try solving the problem before matching all three files, by matching the file that contains one observation per patient (presumably hemicolectomies) to the one with the second-most observations per patient (presumably other surgeries) with the /TABLE keyword:
MATCH FILES /FILE='surgeries.sav' /TABLE='hemicolectomies.sav'
/BY idnumber.
EXECUTE.
This will "fill up" the blank cells for each patient with the hemicolectomy data.
Now use the date/time variables to check which surgeries "belong" to the hemicolectomies, reduce your data accordingly, and match it to the sick leave data using the /TABLE keyword again.
Seems like the easiest solution to me.
I am new to Selenium WebDriver with Python. I need to automate a back-end process of an e-commerce application (order processing).
Here the orders are classified into three types:
say A-type, B-type and C-type.
After successful order completion, the orders are listed in an interface.
For every listed order, its order type is mentioned like
"ORDER TYPE:A-type"
According to these three types, certain scenarios will be executed. I need to identify this type. On a page there may be more than one A-type/B-type/C-type order.
Here the orders are listed one by one with all user details along with the order type. Please help with this.
Its HTML tag details:
order Type: A-type
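To make the goal concrete, here is a rough sketch of the branching I'm after (Python Selenium; the XPath and URL are only guesses, since the snippet above doesn't include the actual markup):
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/orders")  # placeholder URL for the order interface

# Hypothetical locator: adjust the XPath to the page's real order-type element
for order in driver.find_elements(By.XPATH, "//*[contains(text(), 'ORDER TYPE:')]"):
    order_type = order.text.split(":", 1)[1].strip()  # e.g. "A-type"
    if order_type == "A-type":
        pass  # run the A-type scenario
    elif order_type == "B-type":
        pass  # run the B-type scenario
    else:
        pass  # run the C-type scenario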
I moved three files in My Drive to the trash, then retrieved the changes with the Changes: list API (https://developers.google.com/drive/v2/reference/changes/list). It returned three changes, with IDs 11607, 11608 and 11609. However, the largestChangeId field was 11608. When I made the API request again with startChangeId: 11608, it returned the last two changes. When I made the request with startChangeId: 11609, it returned no results.
Is this expected? Or is relying on change IDs in this way not right?
In your case, it looks like you've hit a bug. I'm adding details of changes.list and how it should work in normal circumstances. Let's assume your latest change ID was 11606 and you made three trashing operations.
GET changes?startChangeId=11607 should list:
11607
11608
11609
And the next time you request changes.list, you should always increment the latestChangeId by 1.
In this case you need to request the following on your next poll:
GET changes?startChangeId=11610, and it will return an empty list.
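A rough sketch of that polling pattern with the Python client (assuming an authorized Drive v2 service object):
from googleapiclient.discovery import build

# service = build('drive', 'v2', credentials=creds)  # assumed already authorized

changes = service.changes().list(startChangeId=11607).execute()
for change in changes.get('items', []):
    print(change['id'], change.get('deleted'))

# Start the next poll one past the largest change ID seen so far
next_start = int(changes['largestChangeId']) + 1
# later: service.changes().list(startChangeId=next_start).execute()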
Looking at the results of the list call, there is a lastModifyingUserName, but no user ID or other concrete reference to a user, so I can't strongly verify whether the file was last modified by me or by someone else.
I can approximate this with a string comparison against my user profile information, but that isn't an exact check.
I also looked at the timestamps, but the timestamps for a file that was modified by me don't line up, so it doesn't look like I can do this with timestamps either, which looks like a bug in and of itself, e.g.:
"modifiedByMeDate": "2013-01-31T02:25:26.738Z",
"modifiedDate": "2013-01-31T02:29:58.363Z",
Google is working on improving this so that there is consistency between the actor returned in the lastModifyingUserName field and the permission ID.
Right now, I agree with you, it is pretty much impossible. Sorry.
I'm working on a Symfony project and using sfPropelPager to show a paged list of elements.
The problem is that with a large amount of data to list (i.e. thousands of records) it makes a query to the database for each page shown! That means about 100 extra queries in my case, which is unacceptable.
Here is some of my code: the function that returns the pager object:
// page size comes from the app configuration
$pager = new sfPropelPager('MyTable', sfConfig::get('sfPropelPagerLines'));
// filter and sort the result set
$c = new Criteria();
$c->add('my_table_field', $value);
$c->addDescendingOrderByColumn('date');
$pager->setCriteria($c);
$pager->init();
return $pager;
So, please, if you know a way to get all the results with only one query, it would be a great solution to my problem. Otherwise I'll have to implement the list with an AJAX call for every page the user wants to see.
Thank you very much for your time.
I'm not sure I fully understand your problem but, anyway, avoid using Criteria. Try to build queries with the ModelCriteria API instead: http://www.propelorm.org/reference/model-criteria.html.
For each paginated page, one query to the database will be made; this is the standard behavior for every pager I know of. If the extra queries come from related objects (assuming you want to display information from relations), you may want to create a query that joins those objects before paginating; that way you'll get one query per page for all the data you need to display.
Read this doc for instance: http://www.propelorm.org/documentation/03-basic-crud.html#query_termination_methods
In the end I didn't find a solution to the problem; I had to implement the list via an AJAX call to a function that returns the requested page, so at page load no query for this list slows down the user experience.
Thanks anyway for your help :)