I am able to successfully create a folder within the root of my Box account via the v2 API. However, if I immediately issue a search request for it, I get no results. If I wait for some period of time (maybe 20 mins) and then issue the search request, the folder I created is returned.
Is there some caching going on on the Box side? If so, is there a way to invalidate the cache via the API, or some workaround for this?
Thanks!
What is going on is background processing of your file on the backend. Just like a new website won't show up in a Google search until Google has had time to 'learn' about it, Box's search engine has to process the file and add the text version of its contents to the search index. Exactly how long that takes depends on a lot of variables, including the size and format of the file.
You'll see pretty much the same behavior if you upload a large document to Box and then try to preview it immediately. Box goes off and does some magic to convert your file to a previewable format. Except in the case of the preview, the Box website gives you a little bit of feedback saying "Generating preview." The search bar doesn't tell you "adding new files to search index."
This is mostly because it is more important for Box to receive your file, make sure we store it safely, and let you know that Box has it. A few milliseconds later, we start processing your file for full-text search and all the other processing that we do.
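As far as I know there is no API call to force a re-index, so if you need the new folder to show up promptly, the practical workaround is simply to poll the search endpoint. A rough PHP sketch (the function name is mine, and it assumes a v2 OAuth bearer token):

<?php
// Poll the v2 search endpoint until the newly created folder appears in
// the index, backing off between attempts. $accessToken and $folderName
// are placeholders you supply yourself.
function waitForFolderInSearch($accessToken, $folderName, $maxAttempts = 10)
{
    for ($i = 0; $i < $maxAttempts; $i++) {
        $ch = curl_init('https://api.box.com/2.0/search?query=' . urlencode($folderName));
        curl_setopt_array($ch, [
            CURLOPT_RETURNTRANSFER => true,
            CURLOPT_HTTPHEADER     => ['Authorization: Bearer ' . $accessToken],
        ]);
        $result = json_decode(curl_exec($ch), true);
        curl_close($ch);

        $entries = isset($result['entries']) ? $result['entries'] : [];
        foreach ($entries as $entry) {
            if ($entry['type'] === 'folder' && $entry['name'] === $folderName) {
                return $entry; // the folder has been indexed
            }
        }
        sleep(60 * ($i + 1)); // indexing can take many minutes, so back off slowly
    }
    return null; // still not indexed after $maxAttempts tries
}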
I'm working on a school project about malware-injected documents.
The key function will probably be written in C, I think.
My idea is: if we can inspect the contents of a downloaded file,
all we have to do is save the known-clean version of the code, check the similarity, and filter it, right?
So my question is...
Can a Chrome extension intervene in the download process and view the contents of files of a specified format without executing them?
(Both conditions are important: it must be something the user can authorize, and the file must not be executed.)
I've googled this for a few weeks, but all I could find was "how to implement a download function" kind of material.
Hi guys, I am trying to download a document from an SWF link in iPaper.
Please guide me on how I can download the book.
Here is the link to the book, which I want to convert to PDF or Word and save:
http://en-gage.kaplan.co.uk/LMS/content/live_content_v2/acca/exam_kits/2014-15/p6_fa2014/iPaper.swf
Your kind guidance in this regard would be appreciated.
Regards,
Muneeb
First, open the book in your browser with network capturing enabled (in the developer tools).
You should open several pages at different locations, with and without zoom,
then look at the captured data.
You will see that for each new page you open, the browser requests a new file (or files).
This means there is a file for each page, and from that file your browser builds the image of the page. (Usually there is one file per page and it is some picture format, but I have also encountered base64-encoded pictures and a picture cut into four pieces.)
So we want to download and save all the files that contain the book's pages.
Now, usually there is a consistent pattern to the addresses of the files, with some incrementing number in them (visible in the captured data as the difference between consecutive files). Knowing the number of pages in the book, we can guess the remaining addresses up to the end of the book (and, of course, download all the files programmatically in a for loop),
and we could stop here.
But sometimes the addresses are a bit difficult to guess, or we want the process to be more automatic. Either way, we want to obtain the number of pages and all the page addresses programmatically.
So we have to check how the browser knows that. Usually the browser downloads a few files at the beginning, and one of them contains the number of pages in the book (and potentially their addresses). We just have to find that file in the captured data and parse it in our program, as sketched below.
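For illustration only (the element and attribute names here are invented; the real structure is whatever you find when you open the file yourself), parsing such an XML listing in PHP could look like this:

<?php
// Hypothetical sketch: the element and attribute names below are made up.
// Open the real file (e.g. pages.xml) and adjust the paths to match it.
$xml = simplexml_load_file('pages.xml');

$pageUrls = [];
foreach ($xml->Pages->Page as $page) {
    // assume each <Page> element carries the address of its large image
    $pageUrls[] = (string) $page['zoom'];
}

echo count($pageUrls) . " pages found\n";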
Finally, there is the issue of security:
Some websites try to protect their data one way or another (usually with cookies or HTTP authentication). But if your browser can access the data, you just have to trace how it does so and mimic it.
(If it is cookies, the server will respond at some point with a Set-Cookie: header. It may also be that you have to log in to view the book, so you have to trace that process as well; usually it works via POST messages and cookies. If it is HTTP authentication, you will see something like Authorization: Basic in the request headers.)
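For example, in PHP (illustrative values only; replace the URL, cookie string, or credentials with whatever you see in your own captured traffic), replaying the browser's request could look like this:

<?php
// Replay what the browser sends. Use whichever option matches the
// protection you observed in the captured requests.
$ch = curl_init('http://example.com/protected/page_001.jpg'); // placeholder URL
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    // Cookie-based protection: replay the Cookie header you captured.
    CURLOPT_COOKIE         => 'session=abc123',
    // HTTP Basic authentication instead: uncomment and fill in.
    // CURLOPT_USERPWD     => 'user:password',
]);
$page = curl_exec($ch);
curl_close($ch);
file_put_contents('page_001.jpg', $page);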
In your case the answer is simple:
(All the file names below are relative to the main directory: "http://en-gage.kaplan.co.uk/LMS/content/live_content_v2/acca/exam_kits/2014-15/p6_fa2014/".)
There is a "manifest.zip" file that contains a "pages.xml" file, which lists the number of pages and the links to them. We can see that each page has a thumbnail, a small picture, and a large picture, so we want just the large ones.
You just need a program that loops over those addresses (from Paper/Pages/491287/Zoom.jpg to Paper/Pages/491968/Zoom.jpg).
Finally, you can merge all the JPGs into a PDF.
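A minimal PHP sketch of that loop (assuming the page IDs really are consecutive, as the pages.xml listing suggests, and that the Imagick extension is available for the PDF step):

<?php
// Download every large page image and merge them into one PDF.
$base = 'http://en-gage.kaplan.co.uk/LMS/content/live_content_v2/acca/exam_kits/2014-15/p6_fa2014/';

for ($page = 491287; $page <= 491968; $page++) {
    $url  = $base . "Paper/Pages/$page/Zoom.jpg";
    $data = @file_get_contents($url);
    if ($data !== false) {
        file_put_contents("page_$page.jpg", $data);
    }
}

// Merge the downloaded JPGs into a single PDF (requires the Imagick extension).
$pdf = new Imagick(glob('page_*.jpg'));
$pdf->setImageFormat('pdf');
$pdf->writeImages('book.pdf', true);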
I have a web app which consists of a frontend HTML page where the user enters some search parameters, a PHP processing file which takes those parameters and uses an online Web API to retrieve the relevant data, and a second HTML page where the data is displayed in a dynamic bar chart with D3. The PHP process creates a JSON file, data.json, which is imported via $.getJSON in the second HTML page.
This works fine in Chrome and IE but not in Firefox. If I clear the browser history and run a search, everything works fine. Any subsequent searches do not show the new data, only the data from that original search after the history deletion, even though the data.json file is updating correctly.
So this makes me think that Firefox is for some reason storing the initial data.json data somehow and using that data each time the page is called.
I haven't included any code because this seems to be about how Firefox behaves rather than a problem with the code. It did seem to start happening after I styled the site with Bootstrap/Bootswatch, but I don't see why that would have any effect.
Any ideas why this is happening, please?!
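For what it's worth, I could probably work around it by serving the JSON through a small PHP wrapper that sends no-cache headers and pointing $.getJSON at that instead of the static file (sketch below; data.php is just an illustrative name), but I'd like to understand what is going on:

<?php
// data.php - serve the generated data.json with no-cache headers so the
// browser cannot reuse a stale copy; load "data.php" instead of "data.json".
header('Content-Type: application/json');
header('Cache-Control: no-cache, no-store, must-revalidate');
header('Pragma: no-cache');
header('Expires: 0');
readfile(__DIR__ . '/data.json');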
I have an enterprise Box account, and I was tasked with creating a crawler that scans an account on Box and saves all meta information (including a direct link) in a local database. This works fine.
In PHP, I have also built a function that downloads the documents (via the direct link I obtained from the API) and extracts readable text from them. This was working perfectly a week ago; yesterday, however, it stopped working completely. I'm using the file_get_contents() function to download the file, and currently it only retrieves the document's file size rather than the document itself, which I find strange. I have tried cURL and I get the same result; it seems Box is responding to my direct file requests with the file size instead of the actual file.
The files are ALL open access, so anyone with a direct link can download them without logging in. I have also tried running this code on another server at another hosting company and I get the exact same result. I have tested my code by accessing other files from other locations (not Box) and it works fine.
It's important to note that this was working fine just a week ago, but now it doesn't work at all. Nothing changed on my end in between (that I know of). Anyone have an idea?
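For reference, here is a stripped-down diagnostic version of the fetch, with redirect-following and header capture turned on so you can see exactly what Box sends back (the URL is a placeholder for one of the open-access direct links):

<?php
// Fetch the direct link with redirects followed and keep the response
// headers, to inspect what Box is actually returning.
$ch = curl_init('https://app.box.com/shared/static/EXAMPLE-FILE-ID'); // placeholder
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_FOLLOWLOCATION => true,   // follow any redirect Box inserts
    CURLOPT_HEADER         => true,   // keep headers for inspection
    CURLOPT_USERAGENT      => 'Mozilla/5.0',
]);
$response   = curl_exec($ch);
$headerSize = curl_getinfo($ch, CURLINFO_HEADER_SIZE);
curl_close($ch);

echo substr($response, 0, $headerSize);                            // response headers
file_put_contents('document.bin', substr($response, $headerSize)); // body, if any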
Title: Rotate Homepage Image (for website) - no longer works.
I am a physicist/wildlife artist with a website (created in 2002) to display and market my artwork. I have set it up with an underlying (homepage) image map having links to "tigers", "leopards", "birds", artist info, etc., with the overlying image changing (swapping out) every time the user navigates to or from the homepage. The links on each homepage have the same numerical coordinates and do not change location from page to page; only the image changes. You can see my blank-page site at www.querryart.com. Note that the links below DO work.
The website was fabulous until last year, when my former web host went out of business and I changed to Jumpline.com. Since then, the commands which call the canned subroutines do not work.
The routine which swaps out the image is named pid.cgi (stored in the cgi-bin).
Another one-line page-counter CGI routine I used at the end of each page called a canned program, "count.cgi", which counted visitors to that page, incremented the "hits" for that page, and stored them in a table displayed only to me. This was how I could determine the popularity of various images. This CGI routine also no longer works, giving me an error message on each page.
Anyway, I am lost without these routines (particularly the first one, which swaps out the images). Is it progress that my Cadillac website has turned into an empty wagon? Hope someone can help. I'm not a programmer.
My first guess is that you may need to change the line(s) at the top of your CGI files so the server can find the right interpreter. For example, if you are using Perl, #!/usr/bin/perl is a common path, and so is #!/usr/local/bin/perl.
Oh, and have you set the permissions to 755?
For starters: http://www.querryart.com/cgi-bin/pid.cgi does not exist. You might want to make sure the file is uploaded to the correct place.
Make sure that your host supports CGI scripts.
Make sure your CGI scripts are uploaded to the correct location, according to your host's instructions for installing CGI scripts.
Make sure the scripts are executable (chmod 755).
Make sure the scripts call the correct interpreter (as pointed out by Steve).
From a quick check of your website, it looks like the scripts are not in the right place, because the web server returns a 404 Not Found when I try to get /cgi-bin/pid.cgi.
Furthermore, the fact that the script takes an absolute path as a parameter (cfile=/home/querryar/httpdocs/cgi-bin/dicont.cnf) looks like a glaring security problem, allowing access to any file in your account. You should really consider a different solution.