I'm wondering if there is a Twitch app/website out there that will give me a list of all the VOD IDs for past broadcasts that exist for a specified Twitch channel. I use ReChat to download chat logs so I can search for moments I want to revisit from past streams when I don't remember which stream they occurred on, but I'm finding it tedious to copy and paste each VOD ID one by one.
I'm not a dev myself, but I know there is something in the JSON API that makes this possible. I just don't know how to use it, so I'm wondering if someone else has set this up anywhere on the Internet. Thanks for everyone's help!
So this took me way too long to figure out. I still don't know how to do proper URL-redirect authentication for users of your application, but if you just want a local or server-to-server Python script, here is how to do it with the new Twitch API (Helix). Hope it helps someone out there.
import requests
import json

## It's the login name you see in the streamer's twitch.tv URL.
USER_LOGIN = "<LOGIN_NAME_YOU_WANT_THE_VIDEOS_FROM>"
## First set up your application on your dashboard,
## here: https://dev.twitch.tv/console
## then click "Register Your Application" on the right-hand side.
## For the OAuth redirect just write: http://localhost
## Make note of your Client ID and your Client Secret.
CLIENT_ID = "<YOUR_CLIENT_ID>"
SECRET = "<YOUR_CLIENT_SECRET_CODE>"

## First get an app access token (client credentials flow).
tokenURL = "https://id.twitch.tv/oauth2/token?client_id={}&client_secret={}&grant_type=client_credentials".format(CLIENT_ID, SECRET)
responseA = requests.post(tokenURL)
accessTokenData = responseA.json()
headers = {"Client-ID": CLIENT_ID,
           "Authorization": "Bearer " + accessTokenData["access_token"]}

## Then figure out the user id from the login name.
userIDURL = "https://api.twitch.tv/helix/users?login={}".format(USER_LOGIN)
responseB = requests.get(userIDURL, headers=headers)
userID = responseB.json()["data"][0]["id"]

## Now you can request the video data.
findVideoURL = "https://api.twitch.tv/helix/videos?user_id={}".format(userID)
responseC = requests.get(findVideoURL, headers=headers)
print(json.dumps(responseC.json(), indent=4))
I know you can get 100 VODs at a time from Twitch's GQL endpoint.
You could make a POST request to: https://gql.twitch.tv/gql
with this payload:
PostData = [{"operationName":"FilterableVideoTower_Videos","variables":{"limit":100,"channelOwnerLogin":"usernametogetvideos","broadcastType":null,"videoSort":"TIME","cursor":"MTQ1"},"extensions":{"persistedQuery":{"version":1,"sha256Hash":"2023a089fca2860c46dcdeb37b2ab2b60899b52cca1bfa4e720b260216ec2dc6"}}}]
You also need a Client-Id header. You can obtain one by opening Twitch in a browser and copying your own from the network tab of the developer tools.
It will respond with the entire VOD information for 100 videos.
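For illustration, here is a minimal sketch of that request in Python, sending the PostData payload above as JSON (the Client-Id value is a placeholder you would copy from your own browser session):
import requests

## Placeholder: copy your own Client-Id from the browser's network tab.
headers = {"Client-Id": "<CLIENT_ID_FROM_BROWSER>"}
post_data = [{"operationName": "FilterableVideoTower_Videos",
              "variables": {"limit": 100,
                            "channelOwnerLogin": "usernametogetvideos",
                            "broadcastType": None,
                            "videoSort": "TIME",
                            "cursor": "MTQ1"},
              "extensions": {"persistedQuery": {"version": 1,
                                                "sha256Hash": "2023a089fca2860c46dcdeb37b2ab2b60899b52cca1bfa4e720b260216ec2dc6"}}}]
response = requests.post("https://gql.twitch.tv/gql", json=post_data, headers=headers)
print(response.json())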
This Python script will output the past-broadcast VOD IDs of a specific user (using the new Twitch API, Helix).
import requests

## Helix also requires an OAuth bearer token these days; see the token
## request in the first answer for how to get one from your client id/secret.
headers = {"Client-ID": "CLIENTID",
           "Authorization": "Bearer ACCESSTOKEN"}
r = requests.get("https://api.twitch.tv/helix/videos?user_id=USERID&type=archive",
                 headers=headers)
j = r.json()
for vod in j['data']:
    print(vod['id'])
You need to replace USERID with an actual user ID. To obtain the user ID of a streamer, an API call for a specific VOD will help: https://api.twitch.tv/helix/videos?id=VODID. The response will include a user_id.
CLIENTID also needs to be replaced. You can obtain one by registering your application at Twitch Developers.
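Note that Helix returns at most 100 videos per request, so to get every VOD ID for a channel you have to follow the pagination cursor. A rough sketch, reusing the headers from the snippet above:
url = "https://api.twitch.tv/helix/videos"
params = {"user_id": "USERID", "type": "archive", "first": 100}
while True:
    page = requests.get(url, params=params, headers=headers).json()
    for vod in page["data"]:
        print(vod["id"])
    # The cursor disappears from the response once there are no more pages.
    cursor = page.get("pagination", {}).get("cursor")
    if not cursor:
        break
    params["after"] = cursor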
I have been reading around here and I am seeing multiple messages about the /pages endpoint not working as expected.
It seems that the OneNote APIs (MS Graph or Office 365) are not returning all the pages that the user can see. In particular, recent pages are not shown as available.
This message is for those of you who work for Microsoft and keep an eye on this forum. If you have any explanation or workaround for this, we would like to hear about it.
If this is work in progress, we would also like to know when the APIs can be considered stable and reliable enough for production use.
Update:
Permissions or scopes
scopes=[
"Notes.Read",
"Notes.Read.All",
"Notes.ReadWrite",
]
This is for a device authorization flow; the device is acting as a Microsoft online account. The app is registered in Azure as a personal app, but the enterprise one behaves the same way.
The authorization process is described here
What type of app/authentication flow should I select to read my cloud OneNote content using a Python script and a personal Microsoft account?
After that I am using this endpoint to get the notebooks
https://graph.microsoft.com/v1.0/users/user-id/onenote/notebooks
From the returned JSON I pick the notebook I want to read and I access the link stored in notebook['sectionsUrl']. This call returns a sections JSON.
From this I pick the section I want and I access the link stored in section['pagesUrl'].
Each call returns the expected info except the last one, where I get an arbitrarily low number of pages for the section I want to explore. There is nothing wrong with the format of the info; it is just incomplete or not up to date.
Not sure if this is related, but when I try to access the pages in a section from MS Graph Explorer I see the same behavior (not all the pages are reported). This is a shared notebook and I am using the owner account for all of the above, so it should not be a permission problem.
from msal import PublicClientApplication
import requests

# client_id comes from the Azure app registration; scopes are listed above.
authority = "https://login.microsoftonline.com/consumers"
app = PublicClientApplication(client_id=client_id, authority=authority)
flow = app.initiate_device_flow(scopes=scopes)
# there is an interactive part here that I automated using selenium; you
# are supposed to use a link to enter a code and then authorize the
# device; code not shown
result = app.acquire_token_by_device_flow(flow)
token = result['access_token']
headers = {'Authorization': 'Bearer ' + token}

endpoint = "https://graph.microsoft.com/v1.0/users/c5af8759-4785-4abf-9434-xxxxxxxxx/onenote/notebooks"
notebooks = requests.get(endpoint, headers=headers).json()
for notebook in notebooks['value']:
    print(notebook['displayName'])
    print(notebook['sectionsUrl'])
    print(notebook['sectionGroupsUrl'])

# I pick a certain notebook and fetch its sections,
# then follow the pagesUrl of the section I want.
sections = requests.get(notebook['sectionsUrl'], headers=headers).json()['value']
section = [section for section in sections if section['displayName'] == "Test"][0]
pages = requests.get(section['pagesUrl'], headers=headers).json()
for page in pages['value']:
    print(page['title'])
Update2
If I use this endpoint
https://graph.microsoft.com/v1.0/users/user-id/onenote/sections/section-id/pages
I would expect to get the complete list of pages for that section.
That is not working
After reading the docs again and again, my understanding is that the approach is to
call https://graph.microsoft.com/v1.0/users/user-id/onenote/pages with $filter or $search etc.
Is this correct?
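For what it's worth, that call would look something like the sketch below, reusing the headers from the snippet above. One thing worth knowing: per the Graph docs, the pages endpoint returns only 20 entries per call by default, so asking for more with $top (up to 100) and following @odata.nextLink may by itself explain the arbitrarily low page counts:
# Sketch: query the pages endpoint directly, asking for up to 100 pages
# per call and following @odata.nextLink until the list is exhausted.
endpoint = "https://graph.microsoft.com/v1.0/me/onenote/pages?$top=100"
pages = []
while endpoint:
    data = requests.get(endpoint, headers=headers).json()
    pages.extend(data.get('value', []))
    endpoint = data.get('@odata.nextLink')  # absent on the last page
for page in pages:
    print(page['title'])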
Also, I vaguely remember there is a way to search for a section and have it expanded so that the search returns its children too.
Am I close to understanding this?
Thank you
MM
I'm not well versed in web techniques and would like to know if there's a way (one idea would be to use setWebhook) to make a Telegram bot do simple stuff (like simply repeating the same message whenever someone sends it a message) without setting up a server.
I think there might be no way around it, because I need to parse the JSON object to get the chat_id in order to send messages... but I'm hoping someone here might know a way.
e.g.
https://api.telegram.org/bot<token>/setWebHook?url=https://api.telegram.org/bot<token>/sendMessage?text=Hello%26chat_id=<somehow get the chat_id>
I've tested it with a hard-coded chat id and it works... but of course it'll always only send messages to that same chat, regardless of where it received the message.
Here is a very simple Python bot example; you can run this on your PC, no need for a server.
import requests
from time import sleep

# This will mark the last update we've checked
last_update = 0
# Here, insert the token BotFather gave you for your bot.
token = 'YOUR_TOKEN_HERE'
# This is the base url for communicating with your bot
url = 'https://api.telegram.org/bot%s/' % token

# We want to keep checking for updates. So this must be a never ending loop
while True:
    # My chat is up and running, I need to maintain it! Get me all chat updates
    get_updates = requests.get(url + 'getUpdates').json()
    # Ok, I've got 'em. Let's iterate through each one
    for update in get_updates['result']:
        # First make sure I haven't read this update yet
        if last_update < update['update_id']:
            last_update = update['update_id']
            # I've got a new update. Let's see what it is.
            if 'message' in update:
                # It's a message! Let's send it back :D
                requests.get(url + 'sendMessage',
                             params=dict(chat_id=update['message']['chat']['id'],
                                         text=update['message']['text']))
    # Let's wait a few seconds for new updates
    sleep(3)
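As a small refinement, getUpdates accepts an offset parameter, so you can let Telegram discard updates you have already confirmed instead of filtering on update_id yourself. A sketch using the same url and last_update variables as above:
# Sketch: ask only for updates newer than the last one we processed.
updates = requests.get(url + 'getUpdates',
                       params={'offset': last_update + 1}).json()
for update in updates['result']:
    last_update = update['update_id']
    # handle the update exactly as in the loop above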
That's really interesting, but you'll definitely need a server to parse the JSON value and get the chat_id out of it.
I want to get the domain authority value from "moz.com" (I didn't find other sources).
Sometimes the page does not load properly and the response from moz.com does not have the DOM elements I parse. The page probably uses JavaScript to show the values. It also has a restriction: you cannot analyze more than 3 times per day (I need to visit it at most once a day).
require 'rest-client'
require 'nokogiri'
link_url = "http://google.com"
api_url = "http://moz.com/researchtools/ose/links?site="
response = RestClient.get(api_url + link_url.split("?").first)
value = Nokogiri::HTML(response).css('.url-metrics-authority span.large').first.text.strip #previously there was Nokogiri::HTML(response).css('.metrics-authority').first.text.strip
pp value
From the console that works well, but when I run it as a Ruby script, it fails.
Can I somehow wait for the JS to execute, or are there any other sources to get domain authority?
You can get the Domain Authority for any website/URL by making use of the free URL Metrics API provided by Moz. You will need an AccessID and secret key to consume the Mozscape APIs. I would suggest building a wrapper API around the Moz API so that you can consume the wrapper from your JavaScript.
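For illustration, here is a minimal sketch of such a lookup in Python. The endpoint, auth scheme, and field names below are my assumptions based on Moz's v2 URL-metrics documentation, so double-check them against the docs referenced below:
import requests

ACCESS_ID = "YOUR_ACCESS_ID"    # assumption: Moz API v2 credentials from your account
SECRET_KEY = "YOUR_SECRET_KEY"

def domain_authority(target):
    # Assumption: Moz's v2 url_metrics endpoint takes HTTP basic auth
    # with the AccessID/secret pair and a JSON body of targets.
    resp = requests.post("https://lsapi.seomoz.com/v2/url_metrics",
                         auth=(ACCESS_ID, SECRET_KEY),
                         json={"targets": [target]})
    resp.raise_for_status()
    return resp.json()["results"][0]["domain_authority"]

print(domain_authority("google.com"))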
I am Russ Jones and I consult for Moz. I also helped architect the latest version of Domain Authority.
The appropriate documentation for collecting Domain Authority is here
Getting an API Key is free and allows for 2,500 lookups per month at no faster than 1 every 10 seconds. Paid access starts at $250/mo and includes 120,000 rows per month with significantly fewer restrictions.
After reading the OAuth documentation on Box's website, I understand the steps to get an access_token and refresh_token, which requires an authorization_code.
Step 1: send a GET request to https://www.box.com/api/oauth2/authorize?response_type=code&client_id=CLIENT_ID&state=authenticated&redirect_uri=https://www.appfoo.com
Step 2: after entering the Box credentials in the browser and clicking the "Allow" button, the browser redirects to the specified redirect_uri with state=authenticated&code=AUTHORIZATION_CODE
Step 3: now, with the AUTHORIZATION_CODE from the redirect URL of step 2, getting the access_token can be done programmatically by sending a POST request to https://www.box.com/api/oauth2/token with AUTHORIZATION_CODE, client_id and client_secret in the body, and then parsing the returned JSON response, as sketched below.
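For reference, a sketch of that step 3 token exchange in Python (grant_type=authorization_code is the standard OAuth 2 value; the placeholders come from the steps above):
import requests

# Sketch of step 3: exchange the authorization code for tokens.
resp = requests.post(
    "https://www.box.com/api/oauth2/token",
    data={
        "grant_type": "authorization_code",
        "code": "AUTHORIZATION_CODE",    # obtained from the redirect in step 2
        "client_id": "CLIENT_ID",
        "client_secret": "CLIENT_SECRET",
    },
)
tokens = resp.json()   # contains access_token and refresh_token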
My question is: is it possible to programmatically do step1 and step2 instead of via browser?
thank you very much!
The current OAuth 2 flow requires the user to go through the browser and can't be done programmatically.
It is possible; just imitate every form with cURL, and on the second step post the cookies.
The first time you will need 3 requests; afterwards only one (if the refresh_token isn't expired, otherwise 3 again).
The point about imitating the browser transactions is a good one, but instead of using cURL you would want to use a higher-level tool like Mechanize (available for Ruby, Perl and Python). It will handle the cookies for you and can programmatically traverse forms and links. Good for page scraping, and for writing scripts to order hot concert tickets from TicketMaster too!
If you have the authorization code, you should then be able to get the OAuth token (access_token, refresh_token) via the SDK, correct?
In response to aIKid, this is what I first do to get a BoxClient
BoxClient client = new BoxClient(clientId, clientSecret);
Map<String, Object> authToken = new HashMap<String, Object>();
authToken.put("expires_in", "3600");
authToken.put("token_type", "bearer");
authToken.put("refresh_token", clientRefreshToken);
authToken.put("access_token", clientAccessToken);
BoxOAuthToken oauthToken = new BoxOAuthToken(authToken);
client.authenticate(oauthToken);
return client;
Then, I have this to create a new user,
BoxUser createdUser = new BoxUser();
BoxUserRequestObject createUserRequest = BoxUserRequestObject.createEnterpriseUserRequestObject("someEmail@domain.com", "test user");
createdUser = client.getUsersManager().createEnterpriseUser(createUserRequest);
Now I'm trying to figure out how to do the RUD part of my CRUD operations on users and groups.
My application is meant to speed up the retrieval of phone call information from our telephone system.
The best way to get this information is to create a new search on the telephone system's web interface and export the results to an Excel spreadsheet which my application then imports into a DataSet.
To get the export, from the login screen, the process goes as follows:
Log in
Navigate to Reports Page
Click "Extension Detail" link
Select "Extensions" CheckBox
Select the extensions (typically all the ones currently being used) from the listbox
Specify date range
Click on Export button
It's not a big job to do it manually every day but, for reliability, it would be great if I could make my application do this automatically the first time it starts each day.
Since more than 1 person in the company is going to use this application, having a Windows Service do it would be even better.
I don't know if it'll help, but the system is Datatex Topaz Next Generation telephone management system: http://www.datatex.co.za/downloads/index.html#TNG
Can anyone give me a basic idea how to do this?
Also, can anyone post links (in comments if need be) to pages where I can learn more about how to do this?
I have done something similar to fetch info from a website. I cannot give you an exact answer, but the idea is to send the login info to the page as form values. If the site relies on cookies, you can use this cookie-aware WebClient:
public class CookieAwareWebClient : WebClient
{
private CookieContainer cookieContainer = new CookieContainer();
protected override WebRequest GetWebRequest(Uri address)
{
WebRequest request = base.GetWebRequest(address);
if (request is HttpWebRequest)
{
(request as HttpWebRequest).CookieContainer = cookieContainer;
}
return request;
}
}
You should be aware that some sites rely on a session id being passed, so the first thing I did was to fetch the session id from the page:
var client = new CookieAwareWebClient();
client.Encoding = Encoding.UTF8;
var indexHtml = client.DownloadString(*index page url*);
string sessionID = fetchSessionID(indexHtml); // your own helper that parses the id out of the HTML
Then I had to log in to the page, which you can do by uploading values to it. You can see the specific form elements with "view source", but you have to know a little HTML to do so.
var values = new NameValueCollection();
values.Add("sessionid", sessionID); //Fetched session id
values.Add("brugerid", args[0]); //Username in my case
values.Add("adgangskode", args[1]); //Password in my case
values.Add("login", "Login"); //The login button
//Logging in
client.UploadValues(*url to login*, values); //If all goes perfect, I'm logged in now
And then I could download the page I needed. In your case you may use DownloadFile(...) if the file always has the same URL (something like Export.aspx?From=2010-10-10&To=2010-11-11), or UploadValues(...) where you specify the values as before but save the result.
string html = client.DownloadString(*url*);
It seems you have a lot more steps than I did, but the principle is the same. To see what values you send to the site when you log in etc., you can use a program such as Fiddler (Windows) which can capture the HTTP activity. Essentially you just do exactly the same thing, but watch out for things like session ids, which are temporary.
The best idea is really to use some native way to fetch the data, but if you don't have access to the code, database etc., you have to do it the ugly way. You may also need an HTML parser to fetch the data (oops, you don't, because you export to a file). And last but not least, keep in mind that pages can change, so there is great potential for the login, parsing etc. to fail.
Please ask if you are uncertain about what is going on.
ADDITION
The CookieAwareWebClient is not my code:
http://code.google.com/p/gardens/source/browse/Montrics/Physical.MyPyramid/CookieAwareWebClient.cs?r=26
Using CookieContainer with WebClient class
I also found some relevant threads:
What's a good tool to screen-scrape with Javascript support?
http://forums.asp.net/t/1475637.aspx
With an HTTP client, you need to do the following:
Log in, using cookies or HTTP authentication
Request a page
Submit form data
This means that you need some class or component in your program that can handle HTTP, cookies, authentication and forms. With this, you make the same requests a user would, as in the sketch below.
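To make the flow concrete, here is a short sketch in Python with requests; every URL and form-field name below is a hypothetical placeholder, since the real ones would come from inspecting the actual pages (e.g. with Fiddler):
import requests

# A Session carries cookies across requests, much like the
# cookie-aware WebClient shown earlier.
session = requests.Session()

# 1. Log in by posting the form fields the login page expects (placeholders).
session.post("https://phonesystem.example.com/login",
             data={"username": "USER", "password": "PASS"})

# 2. Request a page, e.g. the reports page.
reports_html = session.get("https://phonesystem.example.com/reports").text

# 3. Submit form data, e.g. the export form, and save the result.
export = session.post("https://phonesystem.example.com/export",
                      data={"extensions": "all",
                            "from": "2010-10-10", "to": "2010-11-11"})
with open("export.xls", "wb") as f:
    f.write(export.content)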