Trying to get Google Drive to work with PCL Xamarin Forms application

I'm using Xamarin Forms to do some cross-platform applications and I'd like to offer DropBox and Google Drive as places where users can do backups, cross-platform data sharing, and the like. I was able to get DropBox working just fine without platform-specific shenanigans, but Google Drive is really giving me fits. I have my app set up properly with Google and have tested it with a regular CLI .NET application using their examples, which read the JSON file off the drive and create a temporary credentials file – all fine and well, but getting that to fly without access to the file system is proving elusive, and I can't find any examples of how to go about it.
I'm currently just using Auth0 as a gateway to allow users to provide credentials/access to my app for their account, which works dandy; the proper scope items are requested (I'm just using read-only file access for testing) and I get a bearer token and refresh token from them. However, when trying to actually use that data and just do a simple file listing, I get a 400 Bad Request error.
I'm sure this must be possible, but I can't find any examples anywhere that deviate in the slightest from using the JSON file downloaded from Google and creating a credentials file – surely you can create an instance of the DriveService object armed with only the bearer token...
Anyway – here's a chunk of test code where I'm trying to get the DriveService object configured. If anyone has done this or has suggestions as to what to try here, I'd very much appreciate your thoughts.
public bool AuthenticationTest(string pBearerToken)
{
    try
    {
        var oInit = new BaseClientService.Initializer
        {
            ApplicationName = "MyApp",
            ApiKey = pBearerToken,
        };
        _googleDrive = new DriveService(oInit);
        FilesResource.ListRequest listRequest = _googleDrive.Files.List();
        listRequest.PageSize = 10;
        listRequest.Fields = "nextPageToken, files(id, name)";
        // All is well till this call to list the files…
        IList<Google.Apis.Drive.v3.Data.File> files = listRequest.Execute().Files;
        foreach (var file in files)
        {
            Debug.WriteLine(file.Name);
        }
        return true;
    }
    catch (Exception ex)
    {
        RaiseError(ex);
        return false;
    }
}
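For anyone hitting the same wall: the 400 is most likely because ApiKey is meant for Google's public API-key access, not for OAuth bearer tokens. The .NET client can wrap a bare access token in a credential instead – a minimal sketch, assuming the Google.Apis.Auth package (pBearerToken, _googleDrive and the application name are carried over from the code above):
using Google.Apis.Auth.OAuth2;
using Google.Apis.Drive.v3;
using Google.Apis.Services;

// Wrap the bearer token in an OAuth credential rather than passing it as an ApiKey.
var credential = GoogleCredential.FromAccessToken(pBearerToken);
_googleDrive = new DriveService(new BaseClientService.Initializer
{
    HttpClientInitializer = credential,
    ApplicationName = "MyApp",
});
With that in place, the Files.List() call above should authenticate with the bearer token alone.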

Related

How to authenticate with Blockfrost.io API?

So I'm trying to import Cardano blockchain data like address balance, amount staked, rewards, etc. into a Google Sheet. I found this project named Blockfrost.io, which is an API for accessing Cardano blockchain info and importing it into apps, etc.
I think I can use this with Google Sheets. The problem is I don't know how to authenticate. I've searched all around the documentation and it's not clear to me. It seems it's possible if you're building an app or using the terminal.
But I just want to authenticate in the easiest way possible, like in the browser address bar, so that it would be simple to get the JSON with the info I need and import it into Google Sheets.
This is where it mentions the Authentication:
https://docs.blockfrost.io/#section/Authentication
I already have an API key to access. But how do I authenticate?
So if I want to check the blockchain metrics (mainnet1234567890 is a dummy key, I won't use mine here):
https://cardano-mainnet.blockfrost.io/api/v0/metrics/project_id:mainnet1234567890
The JSON output is still this:
{
    "status_code": 403,
    "error": "Forbidden",
    "message": "Missing project token. Please include project_id in your request."
}
Is there a correct way to authenticate on the browser address bar?
It's not clear which Blockfrost API you are using (Go, JavaScript, etc.)...
The API key goes in as a header on the request object. I was manually trying to connect to the service, and the following is what I had to do for a request in C#...
// url is e.g. the metrics endpoint from the question.
var url = "https://cardano-mainnet.blockfrost.io/api/v0/metrics";
var aWR = System.Net.WebRequest.Create(url);
aWR.Method = "GET";
aWR.Headers.Add("project_id", "mainnetTheRestOfMyKeyIsHidden");
var webResponse = aWR.GetResponse();
var webStream = webResponse.GetResponseStream();
var reader = new StreamReader(webStream);
var data = reader.ReadToEnd();
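For reference, the same header-based request with the newer HttpClient looks something like this – a minimal sketch, with a placeholder key and the metrics endpoint from the question:
using System;
using System.Net.Http;
using System.Threading.Tasks;

class BlockfrostMetrics
{
    static async Task Main()
    {
        using var client = new HttpClient();
        // Blockfrost reads the API key from the "project_id" request header.
        client.DefaultRequestHeaders.Add("project_id", "mainnetPutYourKeyHere");
        string json = await client.GetStringAsync("https://cardano-mainnet.blockfrost.io/api/v0/metrics");
        Console.WriteLine(json);
    }
}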
Later I realized I wanted to use their API because they implement a rate limiter, something I would rather use than build... I use the following with the Blockfrost API in C#:
const string apiKey = "mainnetPutYourKeyHere";
const string network = "mainnet";
// your key is set during the construction of the provider.
ServiceProvider provider = new ServiceCollection().AddBlockfrost(network, apiKey).BuildServiceProvider();
// from there individual services are created
var AddressService = provider.GetRequiredService<IAddressesService>();
// The call to get the data looked like
AddressTransactionsContentResponseCollection TXR = await AddressService.GetTransactionsAsync(sAddress, sHeightFrom, sHeightTo, 100, iAddressPage, ESortOrder.Desc, new System.Threading.CancellationToken());
// etc. You're going to need to set the bounds above in terms of block height.
Try using Postman and include the "project_id" header with the API key as the value – it will clear up the concept for you, I think.

Upload Revit model files to BIM360 via API and keep them linked

I have two Revit model files, A and B, where B is linked into A. I want to upload the files to BIM360 Docs via the Autodesk.Forge API and keep them linked, so I can see the combined model in the Forge Model viewer when I subsequently view model A.
I have the two files in a zip file, but from what I understand, I shouldn't upload the zip file, but rather upload A and B separately, then create a relationship between them.
I can upload the files without problems, and I've then tried to link them via this code (using the NON-encoded version IDs for A and B):
public async Task SetLinkedFileRelationship(string projectId, string versionId, string linkedVersionId)
{
    BaseAttributesExtensionObject baseAttribute = new BaseAttributesExtensionObject("auxiliary:autodesk.core:Attachment", "1.0");
    CreateRefDataMeta meta = new CreateRefDataMeta(baseAttribute);
    CreateRefData createRefData = new CreateRefData(CreateRefData.TypeEnum.Versions, linkedVersionId, meta);
    CreateRef createRef = new CreateRef(new JsonApiVersionJsonapi(JsonApiVersionJsonapi.VersionEnum._0), createRefData);
    VersionsApi versionsApi = new VersionsApi { Configuration = { AccessToken = _token.AccessToken } };
    await versionsApi.PostVersionRelationshipsRefAsync(projectId, versionId, createRef);
}
...which produces this response:
status: 400
code: FUNCTION_NOT_SUPPORTED
detail: BIM360 currently does not support the creation of refs.
So apparently I can't create the link between A and B like this. Is there another way to accomplish what I want, or is this currently just not possible in BIM360? I know you can do it via the BIM360 Docs web page (using the Upload file -> Linked Files button), but is it possible when I upload the model files via the API? If so, what is the recipe?
Please keep in mind that my question is for uploading to BIM360 Docs - using the Autodesk.Forge API (v2). I'm aware of this post: BIM360 Docs: Setting up external references between files (Upload Linked Files), but that is targeted at manually composing requests. I'd like to be able to use the v2 API.
I believe this post should help: https://forge.autodesk.com/blog/bim360-docs-setting-external-references-between-files-upload-linked-files.

Accessing ArcGIS Pro geoprocessing history programmatically

I am writing an ArcGIS Pro add-in and would like to view items in the geoprocessing history programmatically. The goal of this would be to get the list of parameters and tools used, to be able to better understand and recreate a workflow later, perhaps in another project where we would not have direct access to the history within ArcGIS Pro.
After a lot of searching through documentation, online posts, and debugging breakpoints in my code, I've found that some of this data does exist privately within the HistoryProjectItem class, but since it is a private member of a sealed class, it seems there is nothing I can do to access this data. The other place I've seen this data is less than ideal: the user has an option to write the geoprocessing history to an XML log file that lives within /AppData/Roaming/ESRI/ArcGISPro/ArcToolbox/History. Our team has been told that this file may be a problem because certain recursive operations may cause it to balloon out of control, and after reading online, it seems that most people want this setting disabled to avoid large log files taking up space on their machine. Overall the log file doesn't seem like a great option, as we fear it could slow down a user by having the program write large log files while they are working.
I was wondering if this data is stored somewhere I have missed that could be accessed programmatically from the add-in. It seems to me that the data within Project.Items is always stored regardless of user settings, but it appears to be inaccessible this way due to class member visibility. I'm too unfamiliar with geodatabases and ArcGIS file formats to know if a project will always have a .gdb from which we could perhaps read the history.
Any insights on how to read the geoprocessing history in a way that is minimally intrusive to the user would be ideal. Is this data available elsewhere?
This is the closest/best solution I have found so far that avoids writing to the history logs, which most people disable due to file-size bloat and warnings that one operation may recursively run other operations, causing the file to balloon massively.
https://community.esri.com/t5/arcgis-pro-sdk-questions/can-you-access-geoprocessing-history-programmatically-using-the/m-p/1007833#M5842
It involves reading the .aprx file (which is written to on save) by unzipping it, parsing the XML, and filtering the contents to only GPHistory operations. From there I was able to read all the parameters, environment options, status, and duration of the operation that I was hoping to gain.
public static void ListHistory()
{
    // This can be run in a console app (or within a Pro add-in).
    // Requires references to ArcGIS.Core, Newtonsoft.Json and SharpZipLib,
    // plus using System.Linq, System.Xml and System.Diagnostics.
    CIMGISProject project = GetProject(@"D:\tests\topologies\topotest1.aprx");
    foreach (CIMProjectItem hist in project.ProjectItems
        .Where(itm => itm.ItemType == "GPHistory"))
    {
        Debug.Print("+++++++++++++++++++++++++++");
        Debug.Print($"{hist.Name}");
        XmlDocument doc = new XmlDocument();
        doc.LoadXml(hist.PropertiesXML);
        // It sure would be nice if the Pro SDK had things like the MdProcess class in ArcObjects:
        // https://desktop.arcgis.com/en/arcobjects/latest/net/webframe.htm#MdProcess.htm
        var json = JsonConvert.SerializeXmlNode(doc, Newtonsoft.Json.Formatting.Indented);
        Debug.Print(json);
    }
}

static CIMGISProject GetProject(string aprxPath)
{
    // .aprx files are actually zip files.
    // https://www.nuget.org/packages/SharpZipLib
    using (var zipFile = new ZipFile(aprxPath))
    using (var stream = zipFile.GetInputStream(zipFile.GetEntry("GISProject.xml")))
    using (StreamReader reader = new StreamReader(stream))
    {
        var xml = reader.ReadToEnd();
        // Deserialize the XML from the .aprx file to hydrate a CIMGISProject.
        return ArcGIS.Core.CIM.CIMGISProject.FromXml(xml);
    }
}
Code provided by Kirk Kuykendall

Web API call not returning

I have a RESTful Web API that is running properly, as I can test it with Fiddler. I see calls going through, I see responses coming back.
I am developing a tablet application that needs to use the Web API in order to fetch data or make updates in the repository.
My calls do not return, and there is not a single trace in Fiddler to show that my calls even reach the server.
The first call I need to make is to login. The URI would be this:
http://localhost:53060/api/user
This call would normally return some information about the user (such as group membership, level of authorization and so on). The Web API uses Windows Authentication, so the repository is able to resolve all these fields based on the credentials passed in. As I said, in Fiddler I see the three calls made to the URI as the authentication is negotiated between the caller and the server. The third call returns with a JSON object that contains all information generated from the repository as expected.
Now, moving to my client I have the following:
var webApiClient = new HttpClient(new HttpClientHandler()
{
    UseDefaultCredentials = true
})
{
    BaseAddress = new Uri("http://localhost:53060/")
};
webApiClient.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
HttpResponseMessage response = await webApiClient.GetAsync("api/user");
var userLoginInfo = await response.Content.ReadAsAsync<UserLoginInformation>();
My call to "GetAsync" never returns and, like I said, I see no trace of it in Fiddler.
Any idea of what I'm doing wrong?
Changing the URL where the Web API was exposed seemed to have fixed the problem. Thanks to @Nkosi for the suggestion.
For anyone stumbling onto this question and wondering how to change the URL of the Web API, there are two ways. If the simulator is running on the same machine as the Web API, the change has to be made in the "applicationhost.config" file for IIS Express. You can locate this file by right-clicking on the IIS Express icon in the notification area (the bottom right corner) and selecting "Show All Websites". Highlight the desired Web API and it will show where the application host configuration file is located. In there, one needs to locate the following section:
<bindings>
    <binding protocol="http" bindingInformation="*:53060:localhost" />
</bindings>
and replace the "localhost" name with the IP address of the machine where the Web API is running.
However, this approach will not work once you start testing your tablet app with a real device. IIS Express must be coerced into exposing the Web API to the outside world. I found an excellent node.js package that can help with that. It is called IISExpress-proxy.
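If I recall the package's README correctly (worth verifying against its current documentation), usage is along these lines, proxying local port 53060 to a port reachable from the device:
npm install -g iisexpress-proxy
iisexpress-proxy 53060 to 9000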

Google Drive/OAuth - Can't figure out how to get re-usable GoogleCredentials

I've successfully installed and run the Google Drive Quick Start application called DriveCommandLine. I've also adapted it a little to GET file info for one of the files in my Drive account.
What I would like to do now is save the credentials somehow and re-use them without the user having to visit a web page each time to get an authorization code. I have checked out this page with instructions to Retrieve and Use OAuth 2.0 credentials. In order to use the example class (MyClass), I have modified the line in DriveCommandLine where the Credential object is instantiated:
Credential credential = MyClass.getCredentials(code, "");
This results in the following exception being thrown:
java.lang.NullPointerException
at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:187)
at com.google.api.client.json.jackson.JacksonFactory.createJsonParser(JacksonFactory.java:84)
at com.google.api.client.json.JsonFactory.fromInputStream(JsonFactory.java:247)
at com.google.api.client.googleapis.auth.oauth2.GoogleClientSecrets.load(GoogleClientSecrets.java:168)
at googledrive.MyClass.getFlow(MyClass.java:145)
at googledrive.MyClass.exchangeCode(MyClass.java:166)
at googledrive.MyClass.getCredentials(MyClass.java:239)
at googledrive.DriveCommandLine.<init>(DriveCommandLine.java:56)
at googledrive.DriveCommandLine.main(DriveCommandLine.java:115)
I've been looking at these APIs (Google Drive and OAuth) for 2 days now and have made very little progress. I'd really appreciate some help with the above error and the problem of getting persistent credentials in general.
This whole structure seems unnecessarily complicated to me. Anybody care to explain why I can't just create a simple Credential object by passing in my Google username and password?
Thanks,
Brian O Carroll, Dublin, Ireland
* Update *
Ok, I've just gotten around the above error and now I have a new one.
The way I got around the first problem was by modifying MyClass.getFlow(). Instead of creating a GoogleClientSecrets object from a JSON file, I used a different overload of GoogleAuthorizationCodeFlow.Builder that allows you to enter the client ID and client secret directly as Strings:
flow = new GoogleAuthorizationCodeFlow.Builder(
        httpTransport, jsonFactory, "<MY CLIENT ID>", "<MY CLIENT SECRET>", SCOPES)
    .setAccessType("offline").setApprovalPrompt("force").build();
The problem I have now is that I get the following error when I try to use flow (GoogleAuthorizationCodeFlow object) to exchange the authorization code for the Credentials object:
An error occurred: com.google.api.client.auth.oauth2.TokenResponseException: 400 Bad Request
{
"error" : "invalid_scope"
}
googledrive.MyClass$CodeExchangeException
at googledrive.MyClass.exchangeCode(MyClass.java:185)
at googledrive.MyClass.getCredentials(MyClass.java:262)
at googledrive.DriveCommandLine.<init>(DriveCommandLine.java:56)
at googledrive.DriveCommandLine.main(DriveCommandLine.java:115)
Is there some other scope I should be using for this? I am currently using the array of scopes provided with MyClass:
private static final List<String> SCOPES = Arrays.asList(
    "https://www.googleapis.com/auth/drive.file",
    "https://www.googleapis.com/auth/userinfo.email",
    "https://www.googleapis.com/auth/userinfo.profile");
Thanks!
I feel your pain. I'm two months in and still getting surprised.
Some of my learnings...
When you request user permissions, specify "offline=true". This will ("sometimes" sic) return a refresh token, which is as good as a password with restricted permissions. You can store this and reuse it at any time (until the user revokes it) to fetch an access token.
My feeling is that the Google SDKs are more of a hindrance than a help. One by one, I've stopped using them and now call the REST API directly.
On your last point, you can (just) use the Google ClientLogin protocol to access the previous generation of APIs. However, this is totally deprecated and will shortly be turned off. OAuth is designed to give fine-grained control of authorisation, which is intrinsically complex. So although I agree it's complicated, I don't think it's unnecessarily so. We live in a complicated world :-)
Your experience and mine show that the development community is still in need of a consolidated document and recipes to get this stuff into our rear-view mirrors so we can focus on the task at hand.
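To make the refresh-token flow concrete, here is a minimal sketch (in C#, calling the REST API directly as pinoyyid suggests) of exchanging a stored refresh token for a fresh access token; all credential values are placeholders:
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

class RefreshTokenSketch
{
    static async Task Main()
    {
        using var http = new HttpClient();
        // Form-encoded POST to Google's OAuth 2.0 token endpoint.
        var form = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            ["client_id"] = "<MY CLIENT ID>",
            ["client_secret"] = "<MY CLIENT SECRET>",
            ["refresh_token"] = "<STORED REFRESH TOKEN>",
            ["grant_type"] = "refresh_token",
        });
        var response = await http.PostAsync("https://accounts.google.com/o/oauth2/token", form);
        // On success, the JSON body contains "access_token" and "expires_in".
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}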
Oauth2Scopes is imported as follows:
import com.google.api.services.oauth2.Oauth2Scopes;
You need to have the jar file 'google-api-services-oauth2-v2-rev15-1.8.0-beta.jar' in your class path to access that package. It can be downloaded here.
No, I don't know how to get Credentials without having to visit the authorization URL at least once and copy the code. I've modified MyClass to store and retrieve credentials from a database (in my case, it's a simple table that contains userid, accesstoken and refreshtoken). This way I only have to get the authorization code once; once I get the access/refresh tokens, I can reuse them to make a GoogleCredential object. Here's how I make the GoogleCredential object:
GoogleCredential credential = new GoogleCredential.Builder()
    .setJsonFactory(jsonFactory)
    .setTransport(httpTransport)
    .setClientSecrets(clientid, clientsecret)
    .build();
credential.setAccessToken(accessToken);
credential.setRefreshToken(refreshToken);
Just enter your clientid, clientsecret, accessToken and refreshToken above.
I don't really have a whole lot of time to separate and tidy up my entire code to post it up here but if you're still having problems, let me know and I'll see what I can do. Although, you are effectively asking a blind man for directions. My understanding of this whole system is very sketchy!
Cheers,
Brian
Ok, I've finally solved the second problem above and I'm finally getting a working GoogleCredential object with an access token and a refresh token.
I kept trying to solve the scopes problem by modifying the list of scopes in MyClass (the one that manages credentials). In the end I needed to adjust the scopes in my modified version of DriveCommandLine (the one that's originally used to get an authorization code). I added 2 scopes from Oauth2Scopes:
GoogleAuthorizationCodeFlow flow = new GoogleAuthorizationCodeFlow.Builder(
        httpTransport, jsonFactory, CLIENT_ID, CLIENT_SECRET,
        Arrays.asList(DriveScopes.DRIVE, Oauth2Scopes.USERINFO_EMAIL, Oauth2Scopes.USERINFO_PROFILE))
    .setAccessType("offline").setApprovalPrompt("force").build();
Adding the scopes for user information allowed me to get the userid later in MyClass. I can now use the userid to store the credentials in a database for re-use (without having to get the user to go to a URL each time). I also set the access type to "offline" as suggested by pinoyyid.