I am a Google Drive user (not an API user or developer). Is there any way to search inside file contents? I know filenames can be searched with the search bar, but I couldn't find a way to search inside the files themselves.
For example,
I have a file Names.txt which contains
Oracle
Microsoft
IBM
How can I find, say, Microsoft, just by searching?
I got this information
useContentAsIndexableText boolean Whether to use the content as indexable text. (Default: false)
from
https://developers.google.com/drive/v2/reference/files/insert
and nothing more helpful :p
Any idea how to search the file contents?
Actually, I found one way to filter the files that contain the text, from the link below.
https://support.google.com/drive/answer/2375114?hl=en&ref_topic=2463645
That is, by enclosing the search term in double quotes.
So if I search for "Microsoft", only files that contain Microsoft will be shown.
I think you need some kind of UI to do a full-text content search in your Google Drive. There are several tools for it.
First, you can do it on your desktop with dtSearch or MetaSearch.
Secondly, you can use a cloud-based solution, Findo.
None of these solutions fit me properly, so I decided to write my own cloud-based document search engine, Ambar. For now it supports only Dropbox, but you can give it a try with manual uploads as well.
You can use "" in the search box. Nevertheless, if you have a lot of PDFs, convert them to doc format; this way you'll be able to look up information in the books.
Though I would recommend keeping your PDF files on your computer and analyzing them with a PDF reader (for example, PDF-XChange Viewer).
Is there a file I can read and decode to get the list of custom search engines?
People say you can copy/paste C:\Users\xxx\AppData\Local\Google\Chrome\User Data\Default\Web Data to copy your engines to another computer, but the file isn't in plaintext, and I'm not sure how to read it.
I figured it out
Web Data is an SQLite database.
select * from keywords gets the search engines, although Chrome needs to be closed; otherwise the database is locked and I can't query it. (This is a big issue; if anyone knows how to solve it, please comment.)
If it says the database is locked while Chrome is open, simply copy Web Data to a temp file before reading it.
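The copy-then-query trick can be sketched in a few lines of Python. This is a sketch, not Chrome's documented interface: the column names below (short_name, keyword, url) are assumptions about the keywords table, and the demo builds a small mock "Web Data" file instead of touching a real Chrome profile, so adjust the path and columns to match your installation.

```python
import os
import shutil
import sqlite3
import tempfile

# Stand-in for Chrome's profile file; in real use this would be
# C:\Users\<you>\AppData\Local\Google\Chrome\User Data\Default\Web Data
workdir = tempfile.mkdtemp()
web_data = os.path.join(workdir, "Web Data")

# Build a tiny mock database with an assumed "keywords" schema.
mock = sqlite3.connect(web_data)
mock.execute("CREATE TABLE keywords (short_name TEXT, keyword TEXT, url TEXT)")
mock.execute("INSERT INTO keywords VALUES "
             "('Example', 'ex', 'https://example.com/?q={searchTerms}')")
mock.commit()
mock.close()

# The actual trick: copy "Web Data" first, so the query works
# even while Chrome holds a lock on the original file.
copy_path = os.path.join(workdir, "WebData.copy")
shutil.copyfile(web_data, copy_path)

conn = sqlite3.connect(copy_path)
engines = conn.execute("SELECT short_name, keyword, url FROM keywords").fetchall()
conn.close()

for name, keyword, url in engines:
    print(name, keyword, url)
```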
I have recently downloaded my Facebook archive, from a very old account I started in 2009.
There are some conversations I would like to read; the main problem is that the messages.html inside the zip weighs 98 MB.
Unfortunately, neither Firefox nor Google Chrome can open those 21,109 lines of code in a webview without crashing.
I could open the document with Notepad++, but it's just like searching for a needle in a haystack.
Could you help me, please?
Further to the Linux comments, we can only assume you are trying to look (or search) inside the HTML file. You can use any good text editor, like TextPad, EditPad, etc. You can also download "UnxUtils" (no, it is not misspelled) and use the Windows ports of grep/sed/awk/head/tail/cut, etc. There may be comments or answers posted suggesting Cygwin, which works fine but requires DLL libraries and such. The UnxUtils are stand-alone exe files and work right out of the box, with no installation required.
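If you'd rather script the search than install an editor, a few lines of Python do the same job as grep on a file too big for a browser. This is a sketch; the sample file below stands in for the 98 MB messages.html:

```python
import os
import tempfile

def grep_file(path, needle, encoding="utf-8"):
    """Return (line_number, line) pairs for every line containing `needle`.

    Reads the file line by line, so memory use stays flat even
    for very large files.
    """
    hits = []
    with open(path, "r", encoding=encoding, errors="replace") as fh:
        for lineno, line in enumerate(fh, start=1):
            if needle in line:
                hits.append((lineno, line.rstrip("\n")))
    return hits

# Demo on a small stand-in file:
sample = os.path.join(tempfile.mkdtemp(), "messages.html")
with open(sample, "w", encoding="utf-8") as fh:
    fh.write("<p>hello</p>\n<p>needle in a haystack</p>\n<p>bye</p>\n")

for lineno, line in grep_file(sample, "needle"):
    print(lineno, line)
```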
If you are interested in getting a readable file for each conversation, you can use the first part of the tutorial below, which generates CSV files that are easily searchable.
http://openmachin.es/blog/facebook-messages
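The extraction step from that tutorial can be approximated with Python's standard library. Note the class names below ("message", "user") are assumptions about the old archive markup; inspect your own messages.html and adjust them, and the tiny sample string here stands in for the real file:

```python
import csv
import io
from html.parser import HTMLParser

class MessageExtractor(HTMLParser):
    """Collect (user, text) rows from an assumed archive structure."""

    def __init__(self):
        super().__init__()
        self.rows = []
        self._field = None  # which column the next text chunk belongs to

    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class", "")
        if cls == "user":
            self._field = "user"
        elif cls == "message":
            self.rows.append({"user": "", "text": ""})
        elif tag == "p":
            self._field = "text"

    def handle_data(self, data):
        if self._field and self.rows and data.strip():
            self.rows[-1][self._field] += data.strip()
            self._field = None

# Tiny stand-in for messages.html:
sample = ('<div class="message"><span class="user">Alice</span></div>'
          '<p>Hello there</p>')
parser = MessageExtractor()
parser.feed(sample)

# Write the rows out as CSV (in-memory here; use a real file in practice).
out = io.StringIO()
writer = csv.writer(out)
writer.writerow(["user", "text"])
for row in parser.rows:
    writer.writerow([row["user"], row["text"]])
print(out.getvalue())
```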
Is it possible to build a Chrome extension, that when installed or updated, automatically adds a list of words to the user's custom dictionary?
We use a custom-designed Chrome extension at my company, and essentially I'm looking for an easy way to synchronize everyone's spellchecking.
(it would be messy to have everyone download the custom text file and move to C:\Users\USER\AppData\Local\Google\Chrome\User Data\Default\Custom Dictionary.txt, or whatever the location is)
Thanks!
The best way to create your extension is probably with content scripts that detect user input fields and edit their input on the fly.
As of Chrome 45 there is no API for dictionary access, and it doesn't look like one is planned, either.
https://developer.chrome.com/extensions/api_index
However, if you can find the path to the dictionary they use, and if it's not just bytecode or the like (spoiler: it probably is), then you can probably append your dictionary to it. Use the fileSystem API to edit it:
https://developer.chrome.com/apps/fileSystem
Also, if you go the fileSystem route instead of the content-script route, note that extensions cannot use the fileSystem API; you need to create it as a packaged app.
https://developer.chrome.com/apps/about_apps
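Since there is no dictionary API, the content-script route means shipping your word list inside the extension itself. A minimal, hypothetical manifest sketch registering such a script (the name spellcheck.js and the match pattern are placeholders, not anything Chrome mandates):

```json
{
  "manifest_version": 2,
  "name": "Company spellcheck helper",
  "version": "1.0",
  "content_scripts": [
    {
      "matches": ["<all_urls>"],
      "js": ["spellcheck.js"]
    }
  ]
}
```

spellcheck.js would then watch input fields and apply the shared word list on the fly; updating the extension updates everyone's list at once, which is the synchronization you're after.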
So, I've been trying to get a web page to dynamically display links to videos I have (served over a symbolic link), i.e., without hardcoding an <a></a> tag for each one, and I think I may have found a solution, albeit a hacky one:
Video
Ignoring that this is a horrible way to do this, does anyone know how to format the directory listing that gets generated this way?
I'm guessing there is an Apache config file somewhere, but it is extremely hard to search for, as I do not know what this kind of auto-generated file listing is called.
I'm basically looking to resize the column widths, and maybe even do some prettification.
This is all running on my web/file server and is being accessed from my local machine.
This is what you're looking for:
http://perishablepress.com/better-default-directory-views-with-htaccess/
This tutorial details how the directory listing generated by Apache can be modified to suit your taste using an .htaccess file.
Using Apache's HeaderName and ReadmeName directives and the mod_autoindex module, you can add custom markup to your directory listing pages.
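Those directives can be combined in one .htaccess file. A minimal sketch, assuming mod_autoindex is enabled and that the header/footer paths are placeholders you replace with your own:

```apacheconf
# Turn on listings and use the tabular "fancy" view
Options +Indexes
IndexOptions FancyIndexing HTMLTable NameWidth=*

# Inject custom markup above and below the listing
HeaderName /header.html
ReadmeName /footer.html
```

NameWidth=* stops Apache from truncating long filenames, which addresses the column-width complaint directly; styling the HTMLTable output with CSS from header.html handles the prettification.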
For displaying links to A/V and other files, look at my website: https://wrcraig.com/ApacheDirectoryDescriptions.
It goes beyond the default directory description, providing a spreadsheet to assist in creating detailed descriptions and exporting them in FancyIndex/AddDescription format for inclusion in .htaccess.
It also provides a menu-driven BASH-scripted alternative, using the FancyIndex descriptive data above (automatically adding A/V durations) to recursively populate a custom index.html, while retaining the security features of .htaccess.
The site has examples of the input spreadsheet and both the FancyIndex output and the optional BASH scripted output.
The Problem
I have a 35mb PDF file with 130 pages that I need to put online so that people can print off different sections from it each week.
I host the PDF file on Amazon S3 now and have been told that the users don't like to have to wait on the whole file to download before they choose which pages they want to print.
I assume I am going to have to get creative and output the whole magazine to JPGs and get a neat viewer or find another service like ISSUU that doesn't suck.
The Requirements and Situation
I am given 130 single-page PDF files each week (together these make up the magazine).
Users can browse the magazine.
Users can print a few pages.
I can pay for a service.
The process must be automated.
Things I've tried
Google Docs Viewer - I get an error: "Sorry, we are unable to retrieve the document for viewing or you don't have permission to view the document."
ISSUU.com - They make my users log in to print, and there is no way to automate the upload/conversion.
FlexPaper - Uses SWFTools (see next).
SWFTools - "File is too complex" error.
Hosting the PDF file with an image preview of the cover - Users say having to download the whole file before viewing it is too slow. (I can't get new users. =()
Anyone have a solution to this? Or a fix for something I have tried already?
PDF documents can be optimized for downloading over the web; this process is known as PDF linearization. If you have control over the PDF files you are going to use, you could try optimizing them as linearized PDFs. Many tools can help you with this task, just to name a few:
Ghostscript (GPL)
Amyuni PDF Converter (Commercial, Windows only, usual disclaimer applies)
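With Ghostscript, linearization is a single pdfwrite invocation with -dFastWebView. A sketch wrapping it in Python; input.pdf and output.pdf are placeholders, and the command only runs if Ghostscript and the input file are actually present:

```python
import os
import shutil
import subprocess

# Placeholder filenames -- substitute your own input/output paths.
cmd = [
    "gs",                     # Ghostscript executable
    "-sDEVICE=pdfwrite",
    "-dFastWebView=true",     # emit a linearized ("fast web view") PDF
    "-dBATCH", "-dNOPAUSE",
    "-sOutputFile=output.pdf",
    "input.pdf",
]

if shutil.which("gs") and os.path.exists("input.pdf"):
    subprocess.run(cmd, check=True)
else:
    print("Skipping; the command would be:", " ".join(cmd))
```

A linearized PDF puts the first page and the document catalog at the front of the file, so a browser plugin can render page 1 while the rest is still downloading, which is exactly the complaint your users have.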
Another option could be to split your file into sections and only deliver each section to its "owner". For the rest of the information, you can add bookmarks linking to the other sections, so they can also be retrieved if needed. For example:
If linearization is not enough and you have no natural way to split the file, you could split it by page numbers and create bookmarks like these:
-Pages 1-100
-Pages 101-200
-Pages 201-300
...
-Pages 901-1000
-All pages*
The last bookmark is for the ambitious guy who wants to have the whole thing by all means.
And of course you can combine the two approaches and deliver each section as a linearized PDF.
Blankasaurus,
Based on what you've tried, it looks like you are willing to prep the document(s), or I wouldn't suggest this. See if it'll meet your needs: download ColdFusion and install it locally on your PC/VM. You can use CF's cfpdf tag to automatically create "thumbnails" (you can set the size) of each of the pages without much work. Then load them into your favorite gallery script with links to the individual PDFs. Convoluted, I know, but it shouldn't take more than 10 minutes once you get the gallery script working.
I would recommend splitting the PDF into pages and then using a web-based viewer to publish them online. FlexPaper provides several open source tools, such as pdf2json and pdftoimage, to help with the publishing. Have a look at our examples here:
http://flexpaper.devaldi.com/demo/