I am currently working out a way to parse the data of a page: http://www.foundationfinder.ch/
I would love to do it in Perl. Well, I am just musing about the best way to do the job.
I guess I am in for a nice learning curve ;) This task will teach me some good Perl lessons. At the moment it goes a bit over my head ;-)
So here is a sample page:
... and since I think all 790 result pages can be found within a certain range between Id=0 and Id=100000, I thought I could work through them with a loop:
http://www.foundationfinder.ch/ShowDetails.php?Id=11233&InterfaceLanguage=&Type=Html
http://www.foundationfinder.ch/ShowDetails.php?Id=927&InterfaceLanguage=1&Type=Html
http://www.foundationfinder.ch/ShowDetails.php?Id=949&InterfaceLanguage=1&Type=Html
http://www.foundationfinder.ch/ShowDetails.php?Id=20011&InterfaceLanguage=1&Type=Html
http://www.foundationfinder.ch/ShowDetails.php?Id=10579&InterfaceLanguage=1&Type=Html
I thought I could go the Perl way, but I am not very sure: I was trying to use LWP::UserAgent on the same URLs (see above) with different query arguments, and I am wondering whether LWP::UserAgent provides a way to loop through the query arguments. I am not sure that LWP::UserAgent has a method for that. Well, I have sometimes heard that it is easier to use WWW::Mechanize. But is it really easier!?
By the way: if I went the PHP way, I could do it with cURL, couldn't I!?
Here is my approach: I tried to figure it out and dug deeper into the man pages and HOWTOs. We could have a loop that constructs the URLs and calls cURL repeatedly.
Alternatively, we could add a request_prepare handler that computes and adds the query arguments before the request is sent out.
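Maybe something like this? (Just a sketch of what I have in mind, untested; the handler fills in the query arguments from a counter.)

use LWP::UserAgent;

my $ua = LWP::UserAgent->new;
my $id;

# request_prepare runs just before each request goes out,
# so the query arguments can be computed there
$ua->add_handler( request_prepare => sub {
    my ( $request, $ua, $handler ) = @_;
    $request->uri->query_form( Id => $id, InterfaceLanguage => 1, Type => 'Html' );
});

for my $i (0 .. 100_000) {
    $id = $i;    # the handler above picks this up
    my $res = $ua->get('http://www.foundationfinder.ch/ShowDetails.php');
    # process the reply here
}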
Again, the goal: I want to parse the data and afterwards store it in a local MySQL database.
Should I define an extern_uid!?
And go like this:
use LWP::UserAgent;

my $ua = LWP::UserAgent->new;
for my $i (0 .. 10000) {
    my $res = $ua->get("http://www.foundationfinder.ch/ShowDetails.php?Id=$i&InterfaceLanguage=1&Type=Html");
    # process the reply in $res here
}
Well, but now I am stuck. I need help: can I do the job like this!?
regards
zero
Don't do it like this. Use Live HTTP Headers (a Firefox plugin) or an equivalent tool to see what the JavaScript does behind the scenes while you select what you need, so you can get to the page with the table.
To get the data out of the table, use HTML::TableExtract, or HTML::TreeBuilder::XPath if you want to use XPath.
If you do want to iterate over the queries, just create another variable:

my $url = 'http://www.foundationfinder.ch/ShowDetails.php?Id=' . $id . '&InterfaceLanguage=&Type=Html';

and increment $id as you go; make sure the page is valid before trying to load it with get.
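Here is a minimal sketch of that loop combined with HTML::TableExtract. It is untested against the live site, and the assumption that the interesting table is the first one on the page (depth 0, count 0) may well be wrong, so adjust it to the real markup:

use LWP::UserAgent;
use HTML::TableExtract;

my $ua = LWP::UserAgent->new;
for my $id (0 .. 100_000) {
    my $url = "http://www.foundationfinder.ch/ShowDetails.php?Id=$id&InterfaceLanguage=1&Type=Html";
    my $res = $ua->get($url);
    next unless $res->is_success;    # skip IDs that do not resolve to a page

    my $te = HTML::TableExtract->new( depth => 0, count => 0 );
    $te->parse( $res->decoded_content );

    for my $table ( $te->tables ) {
        for my $row ( $table->rows ) {
            # each $row is an arrayref of cell texts
            print join( ' | ', map { defined $_ ? $_ : '' } @$row ), "\n";
        }
    }
}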
I have written a program which merges two 1D arrays containing names. I print the lists arr1, arr2 and arr3.
I am using Lazarus Free Pascal v. 1.0.14. I was wondering if anyone knows how to break up the results in the DOS-like window, because the list is so long that I can only see the last few names in the returned results. The rest go by too fast to read.
I know I can save the results to a file, and I also use the delay command, but I would like to know if there is a way to somehow break the results or slow them down, or even edit the output console.
I appreciate your help.
This isn't really a programming question, because your console application should output the values without pause. Otherwise your program would become useless if you ever wanted it to run as part of another pipeline in an automated fashion.
Instead you need a tool that you wrap around your program to paginate the output if, and when, you so desire. Such tools are known as terminal pagers and the basic one that ships with Windows is called more. You execute your program and pipe the output to the more program. Like this:
C:\SomeDir>MyProject.exe <input_args> | more
You can change the code of your loop in the following way. Say you print the results with the following loop:
for i := 0 to 250 do
  WriteLn(ArrUnited[i]);
You can replace it with:

for i := 0 to 250 do
begin
  WriteLn(ArrUnited[i]);
  if (i mod 25) = 24 then // wait for the user to press Enter every 25 rows
    ReadLn;
end;
For the future: please post an MCVE (minimal, complete, verifiable example) in your questions; otherwise everyone has to guess what your code looks like.
Forgive me, I'm very new to using REST.
Currently I'm using SP2013 OData (_api/web/lists/getbytitle('<list_name>')/items?) to get the contents of a list. The list has 199 items in it, so I need to call it twice, each time asking for a different set of items. I figured I could do this by calling:
_api/web/lists/getbytitle('<list_name>')/items?$skip=100&$top=100
each time changing however many I need to skip. The problem is this only ever returns the first 100 items. Is there something I'm doing wrong or is $skip broken in the OData service?
Is there a better way to iterate through REST calls, assuming this way doesn't work or isn't practical?
I'm using the JSON protocol with the Accept header equaling application/json;odata=verbose
I suppose the $top=100 isn't really necessary
Edit: I've looked into it more and, while I'm not entirely sure of the terms here, using $skip works fine if you're using the method introduced with SharePoint 2010, i.e., _vti_bin/ListData.svc/<list_name>?$skip=100
Actually, funnily enough, the old way doesn't set a 100-item limit on returns, so $skip isn't even necessary. But if you'd like to return only a certain segment of data, you'd have to do something like:
_vti_bin/ListData.svc/<list_name>?$skip=x&$top=y
where each time through the loop you would do x += y ($top takes a count of items, not an end position).
You can either use the old method which I described above, or check out my answer below for an explanation of how to do this using SP2013 OData.
Alright, I've figured it out. $skip isn't an option meant to be used at the items? level; it works only at the lists? level. But there's a way to do this that is actually much easier than what I wanted to do.
If you just want all the data
In the returned data, assuming the list you are calling holds more than 100 items, there will be a __next value at d/__next (assuming you are using JSON). This __next (it is a double underscore, keep that in mind; I had a few problems at first because I was trying to get d/_next, which never returned anything) is the right URL for getting the next set of items. __next will only have a value if there is another set of items available to get.
I ended up creating a RequestURL variable which was initially set to the original request, and was changed to the d/__next value at the end of each pass through the loop. The loop then checked that RequestURL was not empty before going through another iteration.
Forgive my lack of code, I'm using SharePoint Designer 2013 to make this, and the syntax isn't horribly descriptive.
If you'd only like a small set of data
There are probably situations where you would only want x rows from your list each time through the loop, and that's real easy to do as well.
If you just add a $top=x parameter to your request, the __next URL that comes back with the response will give you the next x rows from your list. Eventually, when there are no rows left to return, __next won't be returned with the response.
Don't forget that in order to use __next you need to have
$skiptoken=Paged=TRUE
in the URL as well.
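To make the flow concrete, here is a minimal sketch of that paging loop. I'm writing it in Perl with LWP::UserAgent purely to illustrate the request/follow-__next logic (the site URL and list name are placeholders, and authentication is left out); the same pattern applies in whatever tool you use:

use LWP::UserAgent;
use JSON;

my $ua  = LWP::UserAgent->new;
# placeholder site and list name -- substitute your own
my $url = "http://yoursite/_api/web/lists/getbytitle('MyList')/items?\$top=100";

while ($url) {
    my $res = $ua->get( $url, Accept => 'application/json;odata=verbose' );
    last unless $res->is_success;

    my $data = decode_json( $res->decoded_content );
    for my $item ( @{ $data->{d}{results} } ) {
        # process each list item here
    }

    # d/__next holds the URL of the next page; it is absent on the last page
    $url = $data->{d}{__next};
}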
I'm extremely new to Perl and trying to prove I can pick it up quickly. What I was asked to do is pass a string as an argument on my command line and feed that into my script. From there it is supposed to search a MySQL table I've made for matches in one column and spit the contents of another column into an array. It was suggested I use Getopt::Std, but I'm uncertain how exactly to do that, and whether that's the best technique.
For example: I have a MySQL table with car manufacturers and car models. I want to run perl myscript.pl Ford and then have it shoot me back an array with
Mustang
Escape
Focus
But I'm uncertain how to get that string input in the first place. Would Getopt::Std be best? If so, how would it be written? I'm picking this up quickly, but I've been at it less than a week, so the simpler the explanation, the better.
Edit: Basically, I was confused about why it was suggested I use Getopt::Std for this. It seems to be completely inappropriate for what I'm trying to do.
Getopt::Std is overkill for this. Your command-line arguments are in @ARGV. If you haven't been able to work that out after a week, then you need better Perl references.
The first argument will be in $ARGV[0], the second in $ARGV[1], and so on.
You should check out the DBI module; Google for a tutorial.
Then try to write your script and post more specific questions with some code if you need more help.
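For illustration, a minimal sketch of the whole flow. The database name, credentials, and the cars table with its manufacturer and model columns are all assumptions here, so match them to your actual schema:

use strict;
use warnings;
use DBI;

# the manufacturer comes straight from the command line: perl myscript.pl Ford
my $make = shift @ARGV or die "Usage: $0 <manufacturer>\n";

# hypothetical database and credentials -- replace with your own
my $dbh = DBI->connect( 'DBI:mysql:database=cars_db', 'user', 'password',
                        { RaiseError => 1 } );

# the ? placeholder keeps the input safe from SQL injection
my $models = $dbh->selectcol_arrayref(
    'SELECT model FROM cars WHERE manufacturer = ?', undef, $make );

print "$_\n" for @$models;

$dbh->disconnect;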
Good morning.
First of all: this is the most impressive community I ever saw!
Well, for several days I have mused about the threefold job of
a. getting
b. parsing
c. storing a number of pages.
Two days ago I thought that getting the pages would be the major task. But this is not the case; I guess the parsing job will be the heroic task, because each of the pages that is to be parsed is a PNG image.
So the question is: after getting them all, how do I parse them!? This seems to be the issue. I guess there are some Perl modules out there that can help with this...
Well, I think this job can only be done with some OCR involved! Question: is there a Perl module that can be used here to support this task?
By the way, see the result pages. And since I think all 790 result pages can be found within a certain range between Id=0 and Id=100000, I again thought I could work through them with a loop:
http://www.foundationfinder.ch/ShowDetails.php?Id=11233&InterfaceLanguage=&Type=Html
http://www.foundationfinder.ch/ShowDetails.php?Id=927&InterfaceLanguage=1&Type=Html
http://www.foundationfinder.ch/ShowDetails.php?Id=949&InterfaceLanguage=1&Type=Html
http://www.foundationfinder.ch/ShowDetails.php?Id=20011&InterfaceLanguage=1&Type=Html
http://www.foundationfinder.ch/ShowDetails.php?Id=10579&InterfaceLanguage=1&Type=Html
As before, I thought I could go the Perl way, using LWP::UserAgent (or perhaps WWW::Mechanize) on those URLs with the different query arguments. But to be frank: the first task, getting all the pages, is not very difficult compared with the parsing. How can the parsing be done!?
Any ideas or suggestions?
I look forward to hearing from you...
zero
You do not need a Perl OCR module; you only need the system function (plus something like File::Slurp to read the result back in).

use File::Slurp;

# Tesseract takes an output base name and appends ".txt" to it itself
system qw[ tesseract.exe foo.png foo ];
my $text = read_file('foo.txt');
You may need to preprocess the images to help Tesseract, say using ImageMagick like:
system qw[ convert.exe -resize 200% image.jpg foo.png ];
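Putting the two parts together, a rough sketch of the whole get-and-OCR loop might look like the following. It assumes, as in the question, that each result page comes back as a PNG, that tesseract and convert are on the PATH, and it does only minimal error handling:

use strict;
use warnings;
use LWP::UserAgent;
use File::Slurp;

my $ua = LWP::UserAgent->new;

for my $id (0 .. 100_000) {
    my $url = "http://www.foundationfinder.ch/ShowDetails.php?Id=$id&InterfaceLanguage=1&Type=Html";
    my $res = $ua->get($url);
    next unless $res->is_success;    # skip IDs with no page behind them

    write_file( 'page.png', { binmode => ':raw' }, $res->content );

    # upscale first so Tesseract has more pixels to work with
    system qw[ convert -resize 200% page.png page_big.png ];
    system qw[ tesseract page_big.png page ];    # writes page.txt

    my $text = read_file('page.txt');
    # ... parse $text and store the fields in MySQL here ...
}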
I want to be able to parse specific content from a website into a MySQL database. For example, from the page http://allrecipes.com/Recipe/Fluffy-Pancakes-2/Detail.aspx I want to parse data into my database, which has a table with columns RecipeName and Ingredients 1-10.
So basically my database will contain the name and all the ingredients for that recipe. There is no need to edit the content, simply parse it in as is (e.g. 3/4 cup milk), since I am using character columns in my database.
How exactly do I go about doing this? I was looking at pre-built parsers, and it seems tough to find one that's easy to use, since I am fairly new to programming. Of course, I could manually enter the values, but I want to parse them in.
Would it be possible to just parse this content and write a file with RecipeName, Ingredient strings, which I could then parse into my database? Or should I insert it directly into the database? I am also unsure how to connect a parser directly to a database, but I might be able to find some information online.
Basically, I am looking for help on exactly how to go about doing this, since I am not very well versed in programming and this seems a lot more complicated than it might actually be.
I am using Java as my main language right now, although I can't say I am very good at it. But I should be able to understand the basic concepts.
Any suggestions on what parser to use or how to do this?
Thanks!
This is how I would do it in PHP. This is almost certainly NOT the most efficient way to do it, nor has it been debugged.
function parseHTML($rawHTML){
    // Find the position of the beginning of the ingredients list
    $startPosition = strpos($rawHTML, '<div class="ingredients"');
    // Find the position of its end, searching from the start found above
    $endPosition = strpos($rawHTML, '</div>', $startPosition);
    // Isolate the ingredients list (note: substr takes a length, not an end position)
    $relevantPart = substr($rawHTML, $startPosition, $endPosition - $startPosition);
    // Strip the HTML tags off of the ingredients list
    $parsedString = strip_tags($relevantPart);
    return $parsedString;
}
Still to be done: you say you have a MySQL database with 10 separate ingredient columns, and this code outputs everything as one big string. You would have to change the strip_tags($relevantPart) call to strip_tags($relevantPart, '<li>'), which would let the <li> tags through. Then you would have to loop through every <li> tag, performing a similar extraction on each piece. It shouldn't be too hard, but I don't feel comfortable writing it with no functioning PHP server to test on.
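For what it's worth, here is that remaining splitting step sketched in Perl, purely to illustrate the logic, since I can't test PHP here (a rough illustration with a tiny inline sample standing in for the real page):

use strict;
use warnings;

# $relevant_part would be the isolated ingredients block from the step above;
# an inline sample stands in for it here
my $relevant_part = '<ul><li>3/4 cup milk</li><li>1 egg</li><li>2 cups flour</li></ul>';

my @ingredients;
for my $chunk ( split /<li[^>]*>/i, $relevant_part ) {
    $chunk =~ s/<[^>]+>//g;       # drop any remaining tags
    $chunk =~ s/^\s+|\s+$//g;     # trim surrounding whitespace
    push @ingredients, $chunk if length $chunk;
}

# prints one ingredient per line: 3/4 cup milk, 1 egg, 2 cups flour
print "$_\n" for @ingredients;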