Construct a tree using ArrayCollection - actionscript-3

How can I build a course structure like Lynda's in Flex, using a Tree whose dataProvider is an ArrayCollection? Inside the ArrayCollection I also have two arrays. I need the structure to match the Lynda course preview. Can anyone help me out? Please refer to the Lynda course structure.
See the link: http://www.lynda.com/Photoshop-tutorials/Design-Web-Getting-CSS-from-Photoshop/151161-2.html
Just like the screenshot.

I just had a look at this link and read your text again and again, yet I fail to understand where the array comes in. This looks like fairly plain sailing to me; you would just have to decide what you like. I use thumbnail tree setups as well as normal tree setups and linked-module tree setups, and I assume the last would be the best fit for you.
Yes, I did recheck that again, so here is one of the many I use, to view in action.
Please go to enter link description here, select tab two (2) = Show Case, and once you have closed the pop-up window, select the directory item (Featured Artists - EXAMPLE); that will open the tree node just like in the Lynda example, and so on.
This tree works with modules, so it selects different .swf file modules showing video, text, anything really, each independently, as you can test for yourself by selecting Information, Img. Gallery, etc. I hope this will help, and if code is required please let me know! Regards, aktell, WebFlashArtistry#gmail.com

Related

How to extract data and generate URL from it?

I'm new to Stack Overflow (Hello World!). I have some basic understanding of JS, C++, HTML, and CSS, and I have been looking in this and other forums, but I am having trouble figuring this one out, mostly because I don't know what it would be called (TL;DR at the bottom):
Essentially, I would like to build a Chrome extension that extracts data from a website (in this case Copart, a website where people sell cars) and creates a link from it that opens another window to one of three car evaluators (Edmunds, KBB, NADA). I fix cars as a hobby, but it's a pain to have to input vehicle info over and over, so I wanted to automate the process as much as possible. Hopefully this will help others as well.
E.g. a generic link to Edmunds is: https://www.edmunds.com/ford/escape/2018/appraisal-value/?vin=XXXXXXXXXXXXXX. I would like to know how to extract the make, model, year, and VIN, in this case from Copart (Example copart page). On KBB, e.g., all I can see that could be automated is entering the VIN into the search field and clicking "Go". Is there a way to have the plugin automatically select "VIN", copy the VIN into the field, and click the "Go" button?
I know, a lot of questions. I'm also not quite sure what this would be called. A crawler? A scraper? A craper? :)
Either way, here is the basic (TL;DR) question:
How do I create a Chrome plugin that extracts data from one website, opens a URL built from that data, and then performs actions on that page, like switching a label, populating a textbox, and clicking a button?
I have only posted this question here, so if there's a better place for it, please let me know.
Mark
Extracting data from one website and searching for the scraped data on another website
1. For this project you can use a combination of Selenium and Scrapy.
2. Since both sites are dynamic pages powered by JavaScript, you do need to check their security constraints.
3. You can make use of one Scrapy spider per site, each backed by Selenium.
4. Pressing the "Go" button can be achieved with Selenium; see the sketch below.
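A minimal sketch of that flow with Selenium in Python. The Copart lot URL, the CSS selectors, and the KBB element IDs below are all assumptions for illustration; both sites change their markup regularly, so you would need to inspect the live pages and substitute the real locators. The Edmunds URL pattern is the one given in the question.

# pip install selenium  (Selenium 4 manages the Chrome driver itself)
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()

# 1. Pull the vehicle details from a Copart lot page.
#    The URL and selectors are placeholders; inspect the real page to find them.
driver.get("https://www.copart.com/lot/12345678")
vin = driver.find_element(By.CSS_SELECTOR, ".lot-vin").text.strip()
year = driver.find_element(By.CSS_SELECTOR, ".lot-year").text.strip()
make = driver.find_element(By.CSS_SELECTOR, ".lot-make").text.strip().lower()
model = driver.find_element(By.CSS_SELECTOR, ".lot-model").text.strip().lower()

# 2. Build the Edmunds appraisal URL following the pattern from the question.
edmunds_url = f"https://www.edmunds.com/{make}/{model}/{year}/appraisal-value/?vin={vin}"
print(edmunds_url)

# 3. On KBB, fill the VIN field and press "Go" (element IDs are guesses too).
driver.get("https://www.kbb.com/whats-my-car-worth/")
driver.find_element(By.ID, "vin-input").send_keys(vin)
driver.find_element(By.ID, "vin-submit").click()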

How to best transfer a document to a SAPUI5 framework?

I'd like to achieve the following and I'm looking for ideas. I have a document and I want to represent/transform its content in/as a nice SAPUI5 app. My idea is the following: a split app having the paragraph titles in the master view (plus a search function on top) and the respective content in the detail view.
I'd like to know from you if
a) you might want to share your ideas and hints on alternatives.
b) this can be achieved within one single file (i.e. all the code for the split app and the document content in one HTML file), maybe using pure HTML (XML is also feasible), given the goal of easily handling a large amount of text that is available in HTML.
c) if you happen to have/know a reusable template.
Thanks in advance!
An interesting question. I went through a similar exercise once, re-presenting my site with UI5.
To your questions:
(a) I would think that the approach you suggest is a good one.
(b) You can indeed include the whole app in a single file; I do that often by using script templates, even with XML views. You can see some examples in my sapui5bin repository, in particular in the SinglePageExamples folder. Have a look at this html file for example: https://github.com/qmacro/sapui5bin/blob/master/SinglePageExamples/SAP-Inside-Track-Sheffield-2014/end.html
What I would suggest is, rather than intermingling the document content with the app and view definitions, maintain the content of your document separately, for example in XML or JSON, and use a client-side model to load it in and bind the parts to the right places.
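To illustrate that last suggestion, here is a minimal sketch (in Python, since the conversion can happen offline) that turns a plain-text document into a JSON file a client-side model could load. It assumes sections are separated by blank lines, with the first line of each section acting as the paragraph title; adapt the splitting rule to your actual document, and note the file names are placeholders.

import json

# A sketch, assuming the source document is plain text in which sections are
# separated by blank lines and the first line of each section is its title.
def document_to_model(path_in, path_out):
    with open(path_in, encoding="utf-8") as f:
        blocks = f.read().split("\n\n")

    sections = []
    for block in blocks:
        lines = [l for l in block.splitlines() if l.strip()]
        if not lines:
            continue
        sections.append({
            "title": lines[0].strip(),                       # master list entry
            "text": " ".join(l.strip() for l in lines[1:]),  # detail view content
        })

    with open(path_out, "w", encoding="utf-8") as f:
        json.dump({"sections": sections}, f, ensure_ascii=False, indent=2)

document_to_model("document.txt", "doc.json")

The app would then load doc.json into a sap.ui.model.json.JSONModel and bind the master list to /sections, and the search function on top becomes a simple filter on the title property.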

Export Excel file to create html webpages

I'd like to know how you would try to solve the following problem:
I have an Excel spreadsheet containing "linked information" (matrix-like) about business processes, which needs to be transformed into HTML websites.
Only certain parts of the spreadsheet should be exported.
The needed data consists of a hierarchical representation of certain categories that hold different components.
So for example category 1 consists of component A which has sub-components A1, A2 and so on.
The goal is to represent that single excel spreadsheet with html websites where the main-categories lead to pages with subcategories (always listing which subcategories they hold) and so on. Kind of like a process or business flow-chart.
Whenever something gets changed, added, or removed within the spreadsheet, I'd like the webpages to reflect the new information accordingly.
The important part would be not having to edit several webpages by hand, but to have everything rebuilt at once, with the right structure.
My first thought was to define an XSD file, extract and transform the data with XSLT, and create the final web structure from that. I'm not quite sure how time-intensive this would be and whether the outcome would actually be satisfying.
Maybe you have a better solution for me or you can point me to some link where something similar is accomplished.
I hope I could get my problem across.
Thanks for your time.
UPDATE
I made a simple version of my spreadsheet.
|*Sub*|*Description*|*Key*|
|SubName|some text|11|
|SubName|some text|12|
|SubName|some text|21|
|SubName|some text|22|
Here the "key"-column is needed to structure the final html layout where 11 and 12 belong to an even higher category 10 which later needs to be added to the result set. What also needs to be added is a "title-category" with the highest level of 1, 2 etc.
I want to reach a point where I can create an html webpage with the title categories being listed (just like headlines) and (on the same page) in some sort of rectangle frame one can see the next level of categories (here 10 and 20) which work as a link and take one to another webpage displaying category 10 and 20 now as headlines and have the sub-categories listed and clickable to reach the final, detailed table listing. So basically it's a top-to-bottom drill down of information.
I have three Excel files with these title categories (for example: customers, orders, services).
Returning these three spreadsheets in one HTML page would be the goal, and from there one could click through to the detail pages. For now I'd be happy just to get one spreadsheet in order.
Has anyone got a good idea how I can:
a) write a schema file to produce a proper XML file,
b) and then turn that XML file into an HTML file?
If you can point me to examples of a similar problem, I'd be happy as well.
Thanks for your support.
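For concreteness, here is a minimal sketch in Python of what the two steps could look like: reading the simplified sheet above, building an XML tree grouped by the first digit of the key, and rendering HTML from it with a small XSLT stylesheet. The column layout follows the sample table; the file names and element names are made up for illustration.

# pip install openpyxl lxml
from openpyxl import load_workbook
from lxml import etree

# a) Build an XML tree from the sheet, grouping rows by the first digit of
#    their key (e.g. 11 and 12 fall under category 10). Column order follows
#    the sample table: Sub | Description | Key.
ws = load_workbook("processes.xlsx").active
root = etree.Element("categories")
groups = {}
for sub, desc, key in ws.iter_rows(min_row=2, values_only=True):
    if key is None:
        continue
    parent = str(key)[0] + "0"
    if parent not in groups:
        groups[parent] = etree.SubElement(root, "category", id=parent)
    item = etree.SubElement(groups[parent], "sub", key=str(key))
    etree.SubElement(item, "name").text = str(sub)
    etree.SubElement(item, "description").text = str(desc)

# b) Turn the XML into an HTML page with a minimal XSLT stylesheet:
#    one headline per category, with its subs listed underneath.
xslt = etree.XML("""
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/categories">
    <html><body>
      <xsl:for-each select="category">
        <h2>Category <xsl:value-of select="@id"/></h2>
        <ul>
          <xsl:for-each select="sub">
            <li><xsl:value-of select="name"/> (<xsl:value-of select="@key"/>):
              <xsl:value-of select="description"/></li>
          </xsl:for-each>
        </ul>
      </xsl:for-each>
    </body></html>
  </xsl:template>
</xsl:stylesheet>
""")
html = etree.XSLT(xslt)(etree.ElementTree(root))
with open("processes.html", "wb") as f:
    f.write(etree.tostring(html, pretty_print=True))

Rerunning this after any spreadsheet change regenerates the whole page at once, which matches the "rebuild everything" requirement; the drill-down pages would be further templates over the same XML.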

URL Masking in .Net / HTML

I have a website with many categories, many sub-categories within each one, and many products within each of those. Since the URLs are very user-unfriendly (they contain a GUID!!!), I would like to use a method which I think is called URL masking. For example, instead of going to catalogue.aspx?ItemID=12343435323434243534, users would go to notepads.htm, which would somehow display the same thing that catalogue.aspx?ItemID=12343435323434243534 displays.
I know I could do this by creating a file for each category / sub-category (individual products cannot be accessed individually as it is a wholesale site - customers cannot purchase directly from the site). This would be a lot of work as the server would have to update each relevant file whenever a category / sub-category / product visibility changes, or a description changes, a name changes... you get the idea...
I have tried using server-side includes, but they don't work when a .aspx file is specified in an HTML file.
I have also tried using an iframe set to 100% width/height and absolutely positioned at left 0 and top 0. This works quite well, but I know there are reasons not to use this method, such as some search engines not coping well with it. I also notice that the title of the "parent" page (notepads.htm) is not the title set in the iframe (logically this is correct, but it's another issue I need to solve if I go ahead and use this method).
Can anyone suggest another way I could do this, or tell me whether I am going along the right lines by using iframes? Thanks.
Regards,
Richard
PS If this is the wrong name for what I am trying to do then please let me know what it actually is so I can rename / retag it.
Look into URL Rewrites. You can create a regular expression and map it to your true url. For example
http://mysite.com?product=banana
could map to
http://mysite.com?guid=lakjdsflkajkfj3lj3l4923892&asfd=9234983920894893
I believe you mean URL Rewriting.
IIS 7+ has a rewrite module built in that you can use for this kind of thing.
URL Rewriters solve the problem you are describing - When someone requests page A, display page B - in a general way.
But yours is not a general requirement. You seem to have a finite uuid-to-shortname mapping requirement. This is the kind of thing you could or should set up in your app, yourself, rather than inserting a new piece of machinery into your system.
Within a default .aspx page, you'd simply look up the shortname from the URL in a persistent table stored somewhere, and then call Server.Transfer() to the uuid-named page associated with that shortname.
It should be easy to prototype this.
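To make that concrete, here is a minimal sketch of the shortname lookup, written in Python for brevity; in the actual ASP.NET page the dictionary would be your persistent table and the return value would feed the Server.Transfer() call described above. All names and IDs are placeholders.

# Sketch of the shortname -> item-ID lookup behind a "masked" URL.
SHORTNAMES = {
    "notepads": "12343435323434243534",  # hypothetical entry
}

def resolve(requested_path):
    # "/notepads.htm" -> "catalogue.aspx?ItemID=12343435323434243534"
    shortname = requested_path.strip("/").removesuffix(".htm")
    item_id = SHORTNAMES.get(shortname)
    if item_id is None:
        return "404.htm"
    return f"catalogue.aspx?ItemID={item_id}"

print(resolve("/notepads.htm"))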

Parsing a website and getting the info I need

Hi, so I need to retrieve the URL of the first article for a term I search on nytimes.com.
So if I search for Apple, this link would return the results:
http://query.nytimes.com/search/sitesearch?query=Apple&srchst=cse
And you just replace Apple with the term you are searching for.
If you click on that link, you will see that NYTimes asks if you mean Apple Inc.
I want to get the url for this link, and go to it.
Then you will just get a lot of information on Apple Inc.
If you scroll down you will see the articles related to Apple.
So what I ultimately want is the URL of the first article on this page.
So I really do not know how to go about this. Do I use Java, or what do I use? Any help would be greatly appreciated, and I would put a bounty on this later, but I need the answer ASAP.
Thanks
EDIT: Can we do this in Java?
You can use Python with the standard urllib module to fetch the pages and the great HTML parser BeautifulSoup to obtain the information you need from the pages.
From the documentation of BeautifulSoup, here's sample code that fetches a web page and extracts some info from it:
import urllib2
from BeautifulSoup import BeautifulSoup

# Fetch the page and hand it to the parser.
page = urllib2.urlopen("http://www.icc-ccs.org/prc/piracyreport.php")
soup = BeautifulSoup(page)

# Each incident lives in a <td width="90%">; its first three children are
# the location, a line break, and the description.
for incident in soup('td', width="90%"):
    where, linebreak, what = incident.contents[:3]
    print where.strip()
    print what.strip()
    print
This is a nice and detailed article on the topic.
You certainly can do it in Java. Look at the HttpURLConnection class. Basically, you give it a URL, call the connect method, and you get back an input stream with the contents of the page, i.e. the HTML text. You can then process that and parse out whatever information you want.
You're facing two challenges in the project you are describing. The first, and probably the lesser one, is figuring out the mechanics of how to connect to a web page and get hold of its text within your program. The second and probably bigger challenge is figuring out exactly how to extract the information you want from that text.
I'm not clear on the details of your requirements, but you're going to have to sort through a ton of text to find what you're looking for. Without actually looking at the NY Times site at the moment, I'm sure it has all sorts of decorations like pretty pictures, the company logo, and headlines, and then there are menus and advertisements and all sorts of other stuff. I sincerely doubt that the NY Times, or almost any other commercial website, returns a search page that contains nothing but a link to the article you are interested in. Somehow your program will have to figure out that the first link points to the "subscribe online" page, the second to an advertisement, the third to customer service, the fourth and fifth to more advertisements, the sixth to the home page, and so on, until you finally get to the one you actually want. How will you identify the interesting link? There are probably headings or formatting that make it recognizable to a human being, but that relies on a lot of intuition to screen out the clutter, which can be difficult to reproduce in a program.
Good luck!
You can do this in C# using the HTML Agility Pack, or using LINQ to XML if the site is valid XHTML. EDIT: It isn't valid XHTML; I checked.
The following (tested) code will get the URL of the first search result:
// Load the search results page.
var doc = new HtmlWeb().Load(@"http://query.nytimes.com/search/sitesearch?query=Apple&srchst=cse");

// Find the <ul class="results"> element and take the href of its first link.
var url = HtmlEntity.DeEntitize(doc.DocumentNode.Descendants("ul")
    .First(ul => ul.Attributes["class"] != null
              && ul.Attributes["class"].Value == "results")
    .Descendants("a")
    .First()
    .Attributes["href"].Value);
Note that if their website changes, this code might stop working.