Difference between the total steps and the sum of all the steps in the source screen - google-fit

On the source data screen (reached from the "See source data" link), Google Fit shows the total steps and a list of all step entries, as in this screenshot.
We found that the total steps sometimes differ from the sum of the steps across all entries, which we calculated manually.
Here are my questions.
Could you let me know what causes this difference?
The entries shown on this screen are also different from the entries in the data downloaded from Google (Takeout data). Could you let me know why?
Notably, the sum of the steps in the downloaded data does match the total steps on the source screen of Google Fit.
Thank you.

Related

extra spaces in report - Report Builder

Good day. I hope everyone is well and safe. I am writing a report that counts the number of times an event occurs; however, when I run the report, it has a lot of extra spaces in it. I have attached a sample of the output. I have tried playing with cell padding and table spacing, but just cannot figure it out. I'm hoping one of the brilliant people here could offer a suggestion. It seems like the number of results influences the number of spaces, but I don't know what I don't know.
Again, thank you for your support.
(Attached screenshots: sample output 1, sample output 2, the Report Builder design screen, the dataset, and the query.)

Google Sheets - Track and Log highest and lowest value of other (dynamic) cell

I am trying to build a sheet for paper trading stocks and crypto. The values in column A are refreshed every 5 minutes, and column B holds the current date and time. At the moment I have a workaround that sends a mail when a value meets certain criteria, but it is not ideal. Another workaround would be to log every API request of a value as history, but that would add 2,000 rows every 5 minutes. Instead, I would like to log, for a couple of specified rows (trades), the highest value, the lowest value, and the timestamp of the logged value. Is there a way? I included a picture of how I envision it. I tried Google and the search within Stack Overflow without any result. The timestamp is a bonus; the log is the most important part. I think it could be done with Apps Script, but I'm not sure.
Example: (picture attached)
I put the question on Freelancer.com. This is the code I was looking for (added as a picture).
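Since the code itself only survives as a picture, here is a minimal Apps Script sketch of the same idea for anyone landing here. The layout is assumed, not taken from the post: live values in column A, the rows to track listed in TRACKED_ROWS, and the high/low log kept in columns D:G of the same sheet.

// Minimal sketch (assumed layout, hypothetical names): column A holds the
// live value; columns D:G hold highest, its timestamp, lowest, its timestamp.
const SHEET_NAME = 'Trades';     // hypothetical sheet name
const TRACKED_ROWS = [2, 5, 9];  // hypothetical rows (trades) to watch

function logHighLow() {
  const sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName(SHEET_NAME);
  const now = new Date();
  TRACKED_ROWS.forEach(function (row) {
    const price = sheet.getRange(row, 1).getValue();   // column A: live value
    if (typeof price !== 'number') return;             // skip blanks/errors
    const high = sheet.getRange(row, 4).getValue();    // column D: logged high
    const low = sheet.getRange(row, 6).getValue();     // column F: logged low
    if (high === '' || price > high) {
      sheet.getRange(row, 4).setValue(price);          // new highest value
      sheet.getRange(row, 5).setValue(now);            // column E: its timestamp
    }
    if (low === '' || price < low) {
      sheet.getRange(row, 6).setValue(price);          // new lowest value
      sheet.getRange(row, 7).setValue(now);            // column G: its timestamp
    }
  });
}

Running logHighLow from a time-driven trigger every 5 minutes (matching the refresh interval) keeps the log current without storing every API request as a new row.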

Data Studio :: showing people that don't fill the Google Form (daily check-in issues)

I'm pretty new to Data Studio, and I'm trying to build a simple dashboard to track daily check-ins for students. The thing is, I want to show a table report in Data Studio of the students that didn't fill the form on a given day.
I have already figured out a few possible solutions:
I've created another sheet in the responses file to show whether each student filled the form across several date ranges,
like this, and I will filter this into a new table that will be shown in Data Studio using a date range filter. But I have difficulties because the date range filter only filters on rows, not on columns (dimensions).
Also, I have quite a lot of students, so after several months the table will bulk up and cause massive processing lag on opening. Is there an alternative way to do this more properly?
I don't know if I can solve this case using a script; I'm trying to use JS to solve it. Still struggling; maybe you can give some advice. Thank you.
You are right: Data Studio filters on rows, not on columns.
The best way would be to unpivot your table in BigQuery.
Connecting Google Sheets to BigQuery: https://support.google.com/docs/answer/9702507?hl=en
Unpivot in BigQuery:
https://cloud.google.com/bigquery/docs/reference/standard-sql/query-syntax#unpivot_operator
and see the example:
SELECT * FROM Produce
UNPIVOT(sales FOR quarter IN (Q1, Q2, Q3, Q4))
where Q1 to Q4 would be your dates from the first row.
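Since you mentioned trying JS: as an alternative to the BigQuery route above, the same unpivot can be done with a small Apps Script that rewrites the wide sheet into a long one, which Data Studio can then filter by date on rows. This is only a sketch; the sheet names 'Wide' and 'Long' and the layout (header row of Student followed by one column per date) are assumptions, not taken from your setup.

// Sketch: unpivot a wide attendance sheet into long format.
// Assumed input ('Wide'): header row [Student, date1, date2, ...].
// Output ('Long'): one row per (Student, Date) pair with its Filled value.
function unpivotCheckins() {
  const ss = SpreadsheetApp.getActiveSpreadsheet();
  const wide = ss.getSheetByName('Wide').getDataRange().getValues();
  const header = wide[0];
  const rows = [['Student', 'Date', 'Filled']];
  for (let r = 1; r < wide.length; r++) {
    for (let c = 1; c < header.length; c++) {
      rows.push([wide[r][0], header[c], wide[r][c]]);
    }
  }
  const long = ss.getSheetByName('Long') || ss.insertSheet('Long');
  long.clearContents();
  long.getRange(1, 1, rows.length, 3).setValues(rows);
}

With each date now a row in the 'Long' sheet, the ordinary date range filter works, though for a large number of students the BigQuery approach will scale better.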

Program to help split and manage a 2,000-column Excel file

I am building a web application that will run off data produced for the public by a governmental agency. The issue is that the CSV file that houses the data I need is a 2,000-column beast of a file. The file is what it is; I need to find the best way to take it and modify it. I know I need to break this data up into much smaller tables within MySQL, but I'm struggling with the best way to do this. I need to make this as easy as possible to replicate next year when the data file is produced again (and every year after). I've searched for programs to help, and everything I've seen deals with a huge number of rows, not columns. Has anyone else dealt with this problem before? Any ideas? I've spent the last week color-coding columns in Excel and moving data to new tabs, but this is time-consuming, will be super difficult to replicate, and I worry it leaves me open to copy-and-paste errors. I'm at a complete loss here!
Thank you in advance!
I suggest that you use functions in Excel to give every column an automatic name: "column1", "column2", "column3", etc.
After that, import the entire CSV file into MySQL.
Decide which columns you want to group together into separate tables. This is the longest step, and no program can help you manage this part.
Query your massive SQL table to get just the columns you want for each group. Export these queries to CSV and then import them as new tables in your database.
At the end, if you want, query all the columns you didn't put into separate groups. Make this a new table in the database and delete the original table to save storage space.
Does this government CSV file get updated and republished in the same format every time? If so, you'll need to write a script to do all of the above automatically; a sketch of what such a script could look like follows.
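For the scripted route, here is a minimal Node.js sketch that performs the column split before import, so that each output file can be loaded as its own MySQL table. The input file name, group names, and column indexes are made up for illustration, and it assumes a simple CSV with no quoted commas.

// Sketch: split a 2,000-column CSV into one narrower CSV per column group.
// All names and indexes below are hypothetical placeholders.
const fs = require('fs');

// Which original column indexes (0-based) go into which output table.
// Repeat the record-key column (here index 0) in every group so the
// resulting MySQL tables can still be joined back together.
const GROUPS = {
  demographics: [0, 1, 2, 3],
  finances: [0, 120, 121, 122],
};

const lines = fs.readFileSync('agency_data.csv', 'utf8').trim().split(/\r?\n/);
const table = lines.map((line) => line.split(','));

for (const [name, cols] of Object.entries(GROUPS)) {
  const out = table.map((row) => cols.map((c) => row[c]).join(','));
  fs.writeFileSync(name + '.csv', out.join('\n'));
}

Each output file can then be imported with LOAD DATA INFILE (or any GUI importer), and the same script re-run unchanged on next year's file.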

Google Refine and fetching data from Freebase for a large data set to create a column from URL not working

I have a Google Refine project with 36k rows of data. I would like to add another column by fetching JSON data from a Freebase URL. I was able to get it working on a small dataset, but when I ran it on this project, it took a few hours to process and then most of the results were blank. I did get some results with data, though. Is there a way to limit the number of rows the data will be fetched for, or a better way of getting the data from the URL?
Thank you!
If you're adding data from Freebase, you'd probably be better off using the "Add column from Freebase" command rather than "Add column by fetching URL."
Facets are one of the most powerful Google Refine features and they can be used to control all kinds of things. In this case, you could use a facet to select a subset of your data and only do the fetch on that subset (and then repeat with a different subset).
The next version of Refine will include better error reporting on the results of URL fetches to help debug problems like this, but in the meantime, make sure that you're respecting all the limits of the remote site, such as the total number of requests and requests per second.