Read API multi-page PDF processing - OCR

I've read the complete documentation here and here, and created a dozen examples, but I still can't tell whether the Read API processes a multi-page PDF in parallel or in consecutive order.
If anyone has good insight into this Azure service, please advise and share your experience.

The Computer Vision Read API processes multi-page PDFs in parallel.
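Whichever way the service schedules pages internally, the calling pattern is the same: you submit the whole document in one request and poll an operation URL until the results for all pages come back together. Below is a minimal sketch against the Read v3.2 REST endpoint; the endpoint, key, and file name are placeholders, not values from the original question.

```typescript
import { readFile } from "node:fs/promises";

// Placeholders -- substitute your own Computer Vision resource values.
const ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com";
const KEY = "<your-subscription-key>";

async function readPdf(path: string) {
  // Submit the whole PDF in a single request; the service splits it into pages.
  const submit = await fetch(`${ENDPOINT}/vision/v3.2/read/analyze`, {
    method: "POST",
    headers: {
      "Ocp-Apim-Subscription-Key": KEY,
      "Content-Type": "application/pdf",
    },
    body: await readFile(path),
  });
  const operationUrl = submit.headers.get("Operation-Location")!;

  // Poll the asynchronous operation; results for all pages arrive together.
  while (true) {
    const poll = await fetch(operationUrl, {
      headers: { "Ocp-Apim-Subscription-Key": KEY },
    });
    const result = await poll.json();
    if (result.status === "succeeded" || result.status === "failed") {
      return result;
    }
    await new Promise((resolve) => setTimeout(resolve, 1000));
  }
}

readPdf("sample.pdf").then((result) => {
  for (const page of result.analyzeResult.readResults) {
    console.log(`Page ${page.page}: ${page.lines.length} lines`);
  }
});
```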

Related

What is the best way to sort Forge/BIM 360 Docs file lists?

I am currently in the process of implementing pagination, sorting, and search functionality in the project files/plans/sheets views of a BIM 360 Docs integration.
Since I couldn't find any best practices regarding these features, I thought I would reach out so that I don't get stuck reinventing the wheel.
Background:
Most of the implementation uses https://github.com/Autodesk-Forge/forge-api-dotnet-client/ SDK.
Based on what I've seen, there appears to be no built-in result sorting in the Forge/BIM 360 APIs; BIM 360 Docs itself looks as if it sorts results on the client.
One would have to cache all the results as structured data on the client in order to provide the sorting functionality, which also does not play well with any pagination approach.
Question:
Is there a way to sort results using the API, so that they come back in a predefined order, also while paginating?
According to our engineering team, a "sort" feature isn't currently supported by the Forge Data Management API. Apologies for the inconvenience caused.
I have logged a request, FDM-1813 [Support sorting in APIs of BIM 360 integration], in our internal system for our engineering team to allocate time to evaluate the possibility. As this will take some time to complete, please keep the request ID for future reference. You're welcome to track updates or provide additional information by quoting this request ID via forge.help#autodesk.com.
However, a workaround is to fetch all data from the API and then sort it on the client side via JavaScript, as in the sketch below.
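For illustration, here is a minimal sketch of that workaround. The DocItem shape and the sort key are hypothetical; in practice the items would come from paging through the Data Management API until every page has been fetched.

```typescript
// Hypothetical item shape; real items would come from the Data Management API.
interface DocItem {
  name: string;
  lastModifiedTime: string; // ISO 8601
}

// Sort the cached list locally, then slice out the requested page.
function sortAndPaginate(
  items: DocItem[],
  page: number,
  pageSize: number,
): DocItem[] {
  const sorted = [...items].sort((a, b) => a.name.localeCompare(b.name));
  return sorted.slice(page * pageSize, (page + 1) * pageSize);
}
```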
Cheers,

Project with Google Drive API? Is it doable?

I've been thinking about a project I'd like to start using the Google Drive API. My idea is to make a webpage (using Laravel) that lets guests download files. I'd have three different types of users: guests, who would be able to download files; logged-in users, who would be able to upload files; and admins, who would be able to do all of that plus delete files (the files would be PDFs only).
Also, the server it would run on wouldn't have much hard drive space for storing files; it would just host the page and maybe keep some of the most important files. The thing is, I have no experience whatsoever with this API, and I would hate to go through all of this trouble just to discover that it can't be done. I've tried reading the documentation, but I still don't know if this is doable, and I can't find reliable tutorials (also, I don't know what counts as reliable, since I've never worked with it).
So, for anyone who has already done something with the API: is this doable? Will the download speeds be too slow? Will users without accounts be able to download? Also, do you know any tutorials that are reliable and do it the right way? Or is the documentation the only thing I'll find/need?
Thanks in advance.
Yes,
All three cases can be handled with the Google Drive SDK, though you'll need to explore the API in depth. Creation and downloads are easy; uploads are trickier.
I recently used the Google Drive API in a Chrome extension that uploads images directly to Drive here.
You can ask questions regarding API usage here.
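To give a rough idea of what the three operations look like, here is a minimal sketch using the official Node.js googleapis client (the question mentions Laravel/PHP; this is only to show the shape of the calls). The service-account file, scopes, and role checks are assumptions, not from the original post. With this model the app owns the files, so guests never need Google accounts: the web server streams downloads to them itself.

```typescript
import { google } from "googleapis";
import { createReadStream, createWriteStream } from "node:fs";

// Hypothetical service-account setup; the key file name is a placeholder.
const auth = new google.auth.GoogleAuth({
  keyFile: "service-account.json",
  scopes: ["https://www.googleapis.com/auth/drive"],
});
const drive = google.drive({ version: "v3", auth });

// Upload (logged-in users): create a Drive file from a local PDF.
async function upload(path: string, name: string): Promise<string> {
  const res = await drive.files.create({
    requestBody: { name, mimeType: "application/pdf" },
    media: { mimeType: "application/pdf", body: createReadStream(path) },
  });
  return res.data.id!;
}

// Download (guests): stream the file's content, e.g. into an HTTP response.
async function download(fileId: string, destPath: string): Promise<void> {
  const res = await drive.files.get(
    { fileId, alt: "media" },
    { responseType: "stream" },
  );
  res.data.pipe(createWriteStream(destPath));
}

// Delete (admins only -- enforce the role check in your own app layer).
async function remove(fileId: string): Promise<void> {
  await drive.files.delete({ fileId });
}
```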
To start with, I would suggest going through one of the Quickstarts in the Google Drive REST API Overview.
Secondly, please take note of the Requirements and Best Practices that a Drive API integration must adopt.
As mentioned:
Requirements
Following an "open with" action, applications must check that the user is authorized to read/write the document to which the passed document ID refers.
Best practices
In the "create new" flow, Google Drive provides your application with an authorization code. This code should be upgraded to an access token as soon as possible before applications take other actions.
Lastly, this SO post, "Good tutorial on Google Drive SDK and OAuth", might also help.

Azure Portal metrics extraction for external use

Having scoured the internet for a viable solution, I have come up empty-handed. I have concluded, however, that there is no current API for extracting raw data from Azure Portal Application Insights, so I wondered whether anyone else out there has managed to achieve this.
My quandary is that I want to display some of the raw data on a Dashing dashboard widget based on some logic, and without a basic URL that gives me the JSON, I am at a loss.
Any help would be gratefully received, even if it is a conclusive "it cannot be done".
Thanks in advance, Mark
Currently, the only method to extract raw data from Azure Application Insights is to export the blob files (via Continuous Export) and extract the data from them. There are discussions on the Visual Studio forum where people are asking for something simpler to be implemented. Who knows, it may happen.
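As a rough sketch of the extraction side, assuming Continuous Export is already writing its JSON blobs into a storage container (the connection string, container name, and record layout below are placeholders, not from the original answer):

```typescript
import { BlobServiceClient } from "@azure/storage-blob";

// Placeholders: these depend on how Continuous Export was configured
// for your Application Insights resource.
const service = BlobServiceClient.fromConnectionString(
  "<storage-connection-string>",
);
const container = service.getContainerClient("<export-container>");

async function readExportedTelemetry(): Promise<void> {
  // Each exported blob holds one JSON telemetry record per line.
  for await (const blob of container.listBlobsFlat()) {
    const buf = await container.getBlobClient(blob.name).downloadToBuffer();
    for (const line of buf.toString("utf8").split("\n")) {
      if (!line.trim()) continue;
      const record = JSON.parse(line);
      // Field layout depends on telemetry type; inspect a sample blob to see it.
      console.log(record);
    }
  }
}

readExportedTelemetry();
```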

Google App Engine Channel API vs Node.js + Socket.io

Please help me choose which one to use for my university project (I want to develop a shared multi-user whiteboard).
In particular, I am interested in the performance of message exchange between users and server using the Channel API versus Socket.io: which one is quicker, and why?
I have implemented an initial version of the whiteboard http://jvyrushelloworld.appspot.com/ by following this tutorial: http://blog.greweb.fr/2012/03/30-minutes-to-make-a-multi-user-real-time-paint-with-play-2-framework-canvas-and-websocket/ The code I used is pretty much the same, except for the server side and the message exchange method: I used Python and the Google Channel API for message exchange; the author of the tutorial used the Play 2 framework and WebSockets.
As you can see, the WebSocket tutorial version works much faster (I don't know whether that's my mistake or a Channel API performance issue). Of course, a lot of optimization could be done to improve performance, but I wonder whether it is worth continuing with the Channel API for this project, or whether it would be better to switch to Socket.io.
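One relevant difference: with the Channel API, client-to-server messages travel as ordinary HTTP requests and only the server-to-client push uses the channel, whereas Socket.io keeps a single persistent, full-duplex connection (WebSocket where available), which tends to give lower per-message latency for this kind of drawing traffic. Here is a minimal sketch of the Socket.io server side of a whiteboard relay; the event name and payload shape are hypothetical, not from the original post.

```typescript
import { Server } from "socket.io";

// Hypothetical payload for one stroke segment.
interface DrawEvent {
  x: number;
  y: number;
  color: string;
}

const io = new Server(3000, { cors: { origin: "*" } });

io.on("connection", (socket) => {
  socket.on("draw", (event: DrawEvent) => {
    // Relay to every connected client except the sender, over the one
    // persistent connection each client holds.
    socket.broadcast.emit("draw", event);
  });
});
```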

Integrating CRM with Google maps

Just started testing Zoho CRM as a CRM solution for our company, and someone asked for a Google map on the page showing our upcoming engagements. I know Zoho provides an API that allows its data to be accessed from the outside, but I actually need to integrate the map into the data-entry form. If anyone could provide a pointer to any mashup with Zoho CRM (be it Google Maps, Bing Maps, or any similar web service), I would be extremely grateful.
I know this is an ancient question, but since there are no answers and this is pretty much all that came up on Google when searching for Zoho CRM integration with Google Maps, I'll take a stab at it anyway. I recently got a similar request, though in that case they wanted to display the leads on a page outside of Zoho.
I created a Java servlet and JSP that run on Google App Engine. The servlet connects to Zoho CRM to retrieve all leads and geocodes the addresses they are registered with. The client-side JavaScript then takes care of creating the markers on the map for all the addresses.
It's a bit too much code to paste here (although not that much), but you can check it out at http://code.google.com/p/zohomap/.
I put the demo up at http://zohomap.appspot.com/.
I know this is an old question, but it came up in a Google search. About three years ago, I started a similar Google Maps integration project for SugarCRM. The JJWDesign Google Maps project is up on GitHub. The idea came about during a marketing meeting and quickly grew out of control.
Download at:
https://github.com/jjwdesign/JJWDesign-Google-Maps
Here are some of the pitfalls that I've experienced:
Exceeding geocoding limits: the Google Maps API v3 imposes a limit of 2,500 geocoding requests per day, throttled to 10 per second, so you'll most likely need to develop something to queue these requests (see the sketch after this list). I used a CRON/scheduled task to handle the processing trigger.
PHP memory limits: the design of SugarCRM creates rather large objects for each of its records, and loading 10,000 of these objects will usually exceed PHP's memory limit. So, special consideration may be needed when deciding how best to pull data into the map.
Always develop/test with a large data set (10,000+ records). That way you'll more easily spot inefficiencies in your code, especially in the JavaScript; the IE browser has been known to cause issues with marker clustering.
Get ready for an explosion of interest in advanced search/filtering functionality, and expect to develop a large Admin configuration section. Everyone wants something slightly different.
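As an illustration of the queueing point above, here is a hypothetical throttled geocoder: it caps each run at the daily quota and processes ten addresses per second. The Lead shape, API key, and logging are placeholders; a real integration would write the coordinates back to the CRM.

```typescript
// Hypothetical lead shape; real data would come from the CRM's API.
interface Lead {
  id: string;
  address: string;
}

const REQUESTS_PER_SECOND = 10; // the v3 throttle mentioned above
const DAILY_LIMIT = 2500; // the v3 daily quota mentioned above

async function geocode(address: string): Promise<{ lat: number; lng: number }> {
  const res = await fetch(
    "https://maps.googleapis.com/maps/api/geocode/json" +
      `?address=${encodeURIComponent(address)}&key=<your-key>`,
  );
  const data = await res.json();
  return data.results[0].geometry.location;
}

// Process at most one day's quota, ten addresses per second.
async function processQueue(leads: Lead[]): Promise<void> {
  const batch = leads.slice(0, DAILY_LIMIT); // defer the rest to the next run
  for (let i = 0; i < batch.length; i += REQUESTS_PER_SECOND) {
    const chunk = batch.slice(i, i + REQUESTS_PER_SECOND);
    await Promise.all(
      chunk.map(async (lead) => {
        const { lat, lng } = await geocode(lead.address);
        console.log(lead.id, lat, lng); // a real integration stores these in the CRM
      }),
    );
    // Pause before the next chunk to stay under the per-second throttle.
    await new Promise((resolve) => setTimeout(resolve, 1000));
  }
}
```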
Cheers,
Jeff