Systematically compare keywords using Google Trends - google-apps-script

Is there a way to automatically get Google Trends comparison of two keywords?
I am able to download relative interest for specific keywords and locations using the gtrendsR R package. However, I would like to systematically compare two keywords.
This is important because of the two-step normalization procedure that Google applies to its relative interest index.
The picture below shows what I am aiming at.

When you download data in R with gtrendsR, requesting both keywords in a single call means Google normalizes them together.
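A minimal sketch (the keywords, geography and date range are placeholders): passing both terms in one gtrends() call returns a single interest_over_time table in which the two series share the same normalization, instead of two separately scaled downloads.

    # Minimal sketch: both keywords in ONE call, so Google scales them
    # against each other (placeholder keywords, geography and window).
    library(gtrendsR)

    res <- gtrends(keyword = c("keyword one", "keyword two"),
                   geo     = "US",
                   time    = "2018-01-01 2018-12-31")

    # One table, two jointly normalized series (0-100 relative to the
    # highest point across BOTH keywords).
    head(res$interest_over_time)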

Related

Google apps script to insert multiple rows based on the number of variable column groups (generated by a Google form)

I'm an artist and have implemented an art show application for our local art group using Google Forms. The application allows an artist to submit a variable number of painting entries (up to a maximum amount which varies from show to show). This presents a classic master (single instance of artist information such as name, phone, email, etc.) and detail (multiple instances of painting information such as title, media, image, etc.) relationship. It's a classic problem that a relational database solves. However, the ease of creating a Google Form and the ease with which folks can work with spreadsheet data make a compelling case for solving this problem using Sheets.

As each painting detail is entered into the form (using conditional questions up to the maximum number of entries), Google adds those detail cells horizontally, extending the row. To date, I've managed to address the problem with a hodge-podge of very specific macros and other brute-force methods to get this data lined up in columns so that it is workable (i.e., sort, slice/dice). I was about to attempt a crude generic script to try to solve this general problem, but as I look at similar questions I see solutions that are ten times more compact and efficient than anything I could cook up.

This is by no means my specific problem but rather a general need of Google Forms users who process master/detail information and end up with unmanageable data strung out in rows of endless variable lengths. If Google were smart they would build this master/detail feature in and gain a raft of new form users. Anyway, here is a view of the simulated captured form data and the desired result:
https://docs.google.com/spreadsheets/d/1Lxuc6uCIkLXyx5evuWIEHULTAOwFwmjT627igC0JbfQ/edit?usp=sharing
Master - Details from Google Forms
The data in columnar format can now be processed with ease. My thinking was to make this generic so I could apply it to any number of form applications that ask for fixed and variable information. To do that I was going to set up variables for the starting cell of the detail data (in this case D, or 4), the maximum number of detail clusters (in this case 5), and the number of cells in each detail cluster (in this case 3). The master information (name info) gets repeated as rows are inserted for each cell cluster. Ideally, the last cell cluster on a row could be determined on the fly rather than specifying a maximum: the first cell in a detail cluster is a required form field, so its absence would indicate the end of the detail clusters on a row.
I get weak in the knees when I think about using arrays and was leaning toward doing this with lots of copying and pasting by way of macros when I thought I might seek some help from those who do this with seeming ease.
I had a similar situation: a form with a start block, followed by repeated blocks of the same questions. I successfully unwrapped it and pushed everything to a BigQuery database using Apps Script.
My guess is that you don't have tons of data, so you can keep everything in Google Sheets. You posted no code, but the strategy should be like this:
Use another sheet in the same spreadsheet for writing your data with the desired structure.
Keep the sheet that the form writes into intact; you don't want to mess up the GF->GS automation.
Use the onFormSubmit event to process new rows that the form writes into the GS sheet.
Yes, you'll have to play with arrays and use something like DetailsStartColumn(4) and DetailsWidth(3) to process the horizontal blocks, detect filled/empty blocks, and write them into your database on another sheet (see the sketch below).
Arrays in Apps Script are fine; they are a great tool once you learn them, and one of the reasons why I like JavaScript ;-)
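A rough sketch of that strategy. The column positions match the example above, but the destination sheet name is an assumption, and it assumes you've set up an installable onFormSubmit trigger:

    // Rough sketch: unwrap one wide form row into one output row per
    // detail cluster. Assumes an installable onFormSubmit trigger and a
    // destination sheet named 'Unwrapped' (both assumptions).
    var DETAILS_START_COLUMN = 4; // first detail cell is in column D
    var DETAILS_WIDTH = 3;        // cells per cluster: title, media, image

    function onFormSubmit(e) {
      var row = e.values; // the submitted responses as a flat array
      var master = row.slice(0, DETAILS_START_COLUMN - 1); // name, phone, ...
      var out = [];

      // Walk the clusters; the first cell of a cluster is a required form
      // field, so an empty one marks the end of the details on this row.
      for (var col = DETAILS_START_COLUMN - 1; col < row.length; col += DETAILS_WIDTH) {
        if (!row[col]) break;
        var cluster = row.slice(col, col + DETAILS_WIDTH);
        while (cluster.length < DETAILS_WIDTH) cluster.push(''); // pad short tail
        out.push(master.concat(cluster));
      }

      if (out.length) {
        var target = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Unwrapped');
        target.getRange(target.getLastRow() + 1, 1, out.length, out[0].length)
              .setValues(out);
      }
    }

The loop body is also where you would swap in a BigQuery insert instead of a sheet write, if you outgrow Sheets.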

Export my analytics data and put them in a database

I am looking to export my Google Analytics data to a SQL database. Do you know of a tool that could help me?
Also, how can I see in Google Analytics the traffic coming from a particular URL?
Thank you all!
You have several options:
UI export: in the top-right corner of your reports you should have an option to download data in various formats (XLS, CSV...)
API: you can use the Reporting API to get the data out in a programmatic/automated way (a sketch follows this list)
One thing you won't be able to do with the free version, no matter what you try:
Reconstruct the entire analytics dataset: whether with the UI or the API, you're limited to querying a maximum of 7 dimensions at a time (e.g. ga:country, ga:deviceCategory, etc.), and you cannot combine certain dimensions together (no official list is available; it's trial and error to find out), whereas there are dozens of dimensions available.
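For illustration, a hedged sketch of the API option using the Analytics advanced service in Apps Script (it must be enabled in the script project; the view ID, dates, metrics and dimensions are placeholders):

    // Hedged sketch: pull a report via the Core Reporting API using the
    // Analytics advanced service. View ID, dates, metrics and dimensions
    // below are placeholders, not real values.
    function exportGaRows() {
      var report = Analytics.Data.Ga.get(
          'ga:12345678',                 // placeholder view (profile) ID
          '2017-01-01', '2017-01-31',    // date range
          'ga:sessions,ga:users',        // metrics
          { dimensions: 'ga:date,ga:country,ga:deviceCategory' }); // <= 7 dims

      var rows = report.rows || [];
      // From here, write the rows to a sheet, or POST them to the service
      // that loads your SQL database.
      Logger.log(rows.length + ' rows fetched');
    }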
So the question for you becomes:
How much resources do I want to invest into partially reverse-engineering Google Analytics vs. the value it brings me vs. what it would cost to get alternative analytics solutions?
I found a cloud-based solution which exports raw Google Analytics data to a MySQL database. Setup is simple: all you need to do is add your Google Analytics connection and a database to which the data should be exported.
MySQL, PostgreSQL, SQL Server and BigQuery are the supported destinations. It creates a few custom dimensions in your Google Analytics account and a tag in Google Tag Manager to send hits to Google Analytics. Data is exported from Google Analytics to the selected destination every day.
I have been using it for the last three months now. Hope this helps.
Exporting the analytics data is a thorny one.
My understanding is that paid GA usage allows the export of all collected GA data.
But free usage does not.
For free usage, all you are going to be able to do, realistically, is to create a report over your GA data (in Data Studio or Google Sheets) that contains the rows and columns you want, and then collect this information and squirt it into a SQL table. You are also liable to come up against sampling.
Re: traffic from a particular URL, the news is better: just filter on Hostname and Page.
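For example (the hostname and path are placeholders), in the Core Reporting API that filter would look like this:

    // Placeholder hostname/path: restrict a Core Reporting query to hits
    // on one specific URL. ';' means AND in the filters syntax.
    var optionalArgs = {
      dimensions: 'ga:date',
      filters: 'ga:hostname==www.example.com;ga:pagePath==/some/page'
    };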

Largest practical datasets in Google spreadsheets?

I'm looking into using Google Sheets as a sort of aggregation solution for different data sources. It's reasonably easy to configure those data sources to output to a common Google Sheet, and it needs to be online for sharing. This sheet would act as my raw, untreated data source. I would then have some dashboards/sub-tables based on that data.
Now, early tests seem to show I'm going to have to be careful about efficiency, as it seems I'm pushing against the maximum of 2 million cells per spreadsheet (we're talking about 15-20k rows of data and 100 or so columns). Handling the data also seems to be pretty slow (regardless of cell limits), at least using formulas, even considering using arrays and avoiding VLOOKUPs, etc.
My plan would be to create other documents (separate documents, not just additional tabs) and refer to the source data through IMPORTRANGE, using the spreadsheet key. Those would use only the subsets of the data required for each dashboard. This should allow me to create dashboards that run faster than if they were set up directly off my big raw-data file, or at least that's my thinking.
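For reference, this is the kind of cross-document pull I mean (the key, sheet name and range are made up):

    =IMPORTRANGE("<source-spreadsheet-key>", "RawData!A1:CV20000")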
Am I embarking on a fool's errand here? Has anyone looked into similarly large datasets in Google Docs? Basically, I'm trying to see whether what I have in mind is even practical... If you have better ideas in terms of architecture, please do share.
I ran into a similar issue once.
Using a multi layer approach like the one you suggested is indeed one method to work around this.
The spreadsheets themselves have no problem storing those two million cells; it's the displaying of all the data that is problematic, so accessing it via IMPORTRANGE or scripts can be worthwhile.
Some other things I would consider:
How up to date does the data need to be? IMPORTRANGE is slow and can make the dashboard you create sluggish; maybe a scheduled import, with the aggregation happening in Google Apps Script, is a viable option (sketched after this list).
At that point you might even want to consider using BigQuery for the data storage (and aggregation): whether you pull the data from another spreadsheet in this project or from a database, a store that will not run into any issues once you exceed 2 million elements would be future-proof.
Alternatively you can use Fusion Tables* for the storage, which are Drive-based, although I think you cannot run sophisticated SQL queries on them.
*: You probably need to enable them in Drive via right click > more > Connect more apps
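A rough sketch of that scheduled-import idea (the spreadsheet key, sheet names and the aggregation itself are all placeholders):

    // Rough sketch: a time-driven trigger copies a pre-aggregated subset
    // of the raw data into the dashboard spreadsheet, so the dashboard
    // itself never touches IMPORTRANGE. All names here are placeholders.
    var SOURCE_SPREADSHEET_ID = '<source-spreadsheet-key>';

    function scheduledImport() {
      var raw = SpreadsheetApp.openById(SOURCE_SPREADSHEET_ID)
                              .getSheetByName('RawData')
                              .getDataRange()
                              .getValues();

      // Placeholder aggregation: count rows per category (column B).
      var counts = {};
      for (var i = 1; i < raw.length; i++) {       // skip the header row
        var key = raw[i][1];
        counts[key] = (counts[key] || 0) + 1;
      }
      var out = Object.keys(counts).map(function (k) { return [k, counts[k]]; });

      var dash = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Dashboard');
      dash.clearContents();
      dash.getRange(1, 1, out.length + 1, 2)
          .setValues([['Category', 'Count']].concat(out));
    }

    // Run once by hand to schedule the import nightly.
    function installTrigger() {
      ScriptApp.newTrigger('scheduledImport').timeBased().everyDays(1).create();
    }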

Full text search in SQL Server (which Stack Overflow turned down)

My application is a help (user assistance) system, much like online MSDN, but the only way to navigate it is through search. Either the search is good or my system is dead.
I am looking for a third-party search engine that can connect to a database and provide out-of-the-box full-text searching.
I have researched SQL Server 2008 iFTS, the Lucene.NET API, and SQLite FTS4, but none of them rank results as well as Google does.
I'm not expecting something like Google, but I need the best-ranking search engine product I can get.
Any suggestions or experience?
Maybe I should not go for a third-party search engine and should use Lucene.NET or SQL Server 2008 FTS instead,
but then how can I establish good ranking for a user-provided search query like
"how can i do upload excel file in XYZ interface"?
My short answer is discouraging: you won't be able to find one, or build it yourself, even for an "okay" solution.
If you want good ranking:
Make your site friendly to search engines (which doesn't necessarily mean that you have to open it to the public; just make sure search engines understand the URLs.)
Pay Google to do it (look for Google Apps)
As you said, a search engine has to do at least two things. The first is indexing, i.e., finding the documents in the database that match the queried keywords. The second is ranking, which sorts all the documents and highlights the most relevant ones.
Ranking is one of the key factors in how good a search engine is. It's not surprising that ranking is hard.
To give you an idea how hard it is, take the sentence in your question (i.e., "how can i do upload excel file in XYZ interface") for example. A search engine has to answer at least two questions to get good results:
Which keywords are most important? For example, "XYZ" might be more important than the words "how" and "can".
What are the possible meanings of each word? "Excel" can be Microsoft Excel, or Xcel Energy (a company whose name sounds like "excel").
There is a whole field of computer science dedicated to this problem. If you want more evidence, take a quick look at ACM WWW.
One thing that is even more discouraging is that even an "okay" solution would be difficult. The high-level point is that the computer knows nothing about English; it has to read a lot to learn how to rank documents.
Sadly, "a lot" means a lot of work. For example, many textbooks suggest ranking documents based on TF/IDF, but getting a reasonable cut for these values requires crawling millions of web pages.
To summarize:
Ranking is hard.
Therefore it's not surprising that you won't be able to find any free, out-of-the-box solutions, and Google and Microsoft keep their ranking algorithms proprietary.
If you want to rank documents in a large database, get a search engine.
Check out the new semantic search feature in SQL Server 2012:
http://msdn.microsoft.com/en-us/library/gg492075%28v=sql.110%29.aspx It won't be a silver bullet, but it might provide you with an "out of the box" approach.
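A hedged sketch of what that looks like (the table, column and key are invented; it requires a full-text index on the column created WITH STATISTICAL_SEMANTICS):

    -- Invented schema: rank help topics semantically similar to a given
    -- document key. Requires a full-text index on the column created
    -- WITH STATISTICAL_SEMANTICS (SQL Server 2012+).
    DECLARE @sourceKey INT = 42;  -- placeholder document key

    SELECT TOP (10) m.matched_document_key, m.score
    FROM SEMANTICSIMILARITYTABLE(dbo.HelpTopics, content, @sourceKey) AS m
    ORDER BY m.score DESC;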

Best way to do a partial search in SQL 2008

I've looked into SQL 2008's built-in full-text search, and also Lucene.NET, but I don't think they'll do what I need. And I just want to make sure I'm building my program as efficiently as possible.
So here's the dream. I want to have a single textbox on a page (like Google) and allow the user to enter ANYTHING. And based on their text, I will search tens of tables to find what they're looking for.
Example: my database contains thousands of locations, each of which has multiple names/codes. Within each location, there are tonnes of data associated with it.
So if the user wants to display all the locations with codes that contain "VM" ("CD-VM01", "CD-VM02", "CD-VM03", etc.), they should be able to. Or if they want to find all the locations in Toronto, they just type Toronto. I want to make the search as easy as possible for people. (I've found that people don't like thinking.)
Plus it ends up being easier to scale to more search options if I can just search the database, and not have to add new fields to a search screen.
So if I don't use full-text search (which I can't, for partial matches), the only thing I can see that I'm left with is LIKE... is that right? Is that my only option?
I guess the question is, even if you were able to do this in the database, how would you handle it in the UI?
Most likely every search result from a different table will have different attributes that need to be displayed in order for the end user to understand what it is.
The Google search box only needs to search one thing - the content of web pages - and return one type of result - web page URLs and excerpts. Fundamentally you are trying to search for many different things, and so you'll most likely need to handle each case separately.
Alternatively, you could maintain a denormalized search table that contains only the search text and the common attributes you think need to be displayed with each hit. Maintain it either with a scheduled task or with triggers. You'd be able to use FTS on this as well.
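A hedged sketch of such a table (the schema is invented):

    -- Invented schema: one denormalized table fed from the source
    -- tables, holding the searchable text plus the common attributes
    -- you want to display with every hit.
    CREATE TABLE dbo.SearchIndex (
        SearchId     INT IDENTITY
                     CONSTRAINT PK_SearchIndex PRIMARY KEY,
        SourceTable  SYSNAME       NOT NULL,  -- which table the hit came from
        SourceId     INT           NOT NULL,  -- key within that table
        DisplayTitle NVARCHAR(200) NOT NULL,  -- shown in the result list
        SearchText   NVARCHAR(MAX) NOT NULL   -- concatenated searchable text
    );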
Update
Some of the comments express some uncertainty over what SQL Server Full-Text Search is capable of. FTS can most definitely search for a single string anywhere within the text of a column, and can do other things as well (proximity search, free-text search, etc.) If you're just getting started then I'd recommend the TechNet pages on the subject, the documentation is very comprehensive.
In particular I'd suggest having a look at the section on Configuring Catalogs and the Getting Started page (Cole's Notes: you have to create catalogs - writing CONTAINS queries without them won't get you very far). Then take a look at the querying page. I'd be very surprised if you can't find answers to any and all of your questions there.
If you still can't get it to work, I would post a new question with the specifics of your problem - what you've tried, what you're expecting, and what's happening instead.
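For example, against the denormalized table sketched earlier (names still invented), a catalog plus a prefix query covers the "VM" case:

    -- Prefix search with FTS: "VM*" matches words starting with VM,
    -- e.g. the VM01 in CD-VM01 (word breakers split on the hyphen).
    CREATE FULLTEXT CATALOG SearchCatalog;
    CREATE FULLTEXT INDEX ON dbo.SearchIndex (SearchText)
        KEY INDEX PK_SearchIndex ON SearchCatalog;

    SELECT SourceTable, SourceId, DisplayTitle
    FROM dbo.SearchIndex
    WHERE CONTAINS(SearchText, N'"VM*"');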
I believe Lucene does exactly what you're looking for. You can add an index from any external data source (including multiple database tables), then query that index and you'll get back pointers to the matching records.
The drawback is that unlike with full-text indexing, you're responsible for building and maintaining the index yourself.
You can see an example of how Lucene.NET might be used.
It appears that the easiest / quickest solution for this exact problem would be to use LIKE.
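For the record (the table and column names are made up), that looks like:

    -- Leading wildcard finds "VM" anywhere in the code, at the cost of
    -- scanning every row: no index seek is possible with '%VM%'.
    SELECT LocationId, Code
    FROM dbo.Locations
    WHERE Code LIKE '%VM%';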