Use the Arelle web service with PHP (XBRL)

I use the Arelle web service to get the fact list of XBRL files via cURL in PHP, but the responses from this web service take a long time (about 13 seconds). Are there any suggestions for speeding it up?
Also, I can't get this web service running on Debian with an nginx web server. Can anyone help me?
Thanks.

You can try using a prepopulated Arelle cache folder. If your documents are based on the same taxonomy, the first request is usually the slowest one, since that is when Arelle populates its taxonomy file cache.
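For illustration, one way to keep that cache warm is to run Arelle once in its web-server mode and point every PHP cURL call at that long-running process, so the taxonomy download only happens on the first request. A minimal sketch in Python; the port, REST path, and document URL are assumptions, so verify them against the API listing your Arelle version serves (the disk cache itself normally sits under the user's application-data folder, e.g. ~/.config/arelle/cache on Linux):

```python
# Sketch: keep one long-running Arelle web server so its taxonomy cache
# stays warm across requests, instead of starting Arelle per call.
# Start the server once, e.g.:
#   python arelleCmdLine.py --webserver localhost:8080
import time
import requests

ARELLE = "http://localhost:8080"        # assumed host/port
DOC = "http://example.com/filing.xbrl"  # hypothetical instance document

for attempt in (1, 2):
    start = time.time()
    # Endpoint follows Arelle's REST pattern; check it for your version.
    r = requests.get(f"{ARELLE}/rest/xbrl/view",
                     params={"file": DOC, "view": "facts", "media": "json"})
    r.raise_for_status()
    # The first call downloads and caches the taxonomy; later calls reuse it.
    print(f"attempt {attempt}: {time.time() - start:.1f}s")
```

Running the same two-request timing test from PHP should show the second call dropping well below 13 seconds once the cache is populated.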

Related

Access to restricted data types and implementing them in a development environment

We are developing an application that needs write access to restricted data types, and it looks like Google has stopped taking new requests for whitelisting apps.
https://developers.google.com/fit/android/data-types#restricted_data_types
Note: Google has temporarily stopped taking new requests to write to restricted data types. We are updating our policy and process for reviewing requests and will update this documentation again when we resume.
Does anyone from Google have any idea when they will resume it?
Also: is there a way to implement/write restricted data in a development environment or debug build without whitelisting, and whitelist the app before going to production?
There is no timeline yet for when this will be available.
(Source: I work on Google Fit)

Python Web Crawler with stored Web History

I'm creating a Python web crawler with the ability to browse web history, parse through the information, and store the important pieces in a database for forensic/academic purposes. I understand how to crawl websites, but the part I'm struggling with is crawling through web history. I'll give a scenario:
During a forensic investigation.
You have been given a full forensic image of a suspect's computer. You then locate the AppData folder for Google Chrome, which stores all information about the suspect, including form data, credentials, and web history.
How would I set up the web crawler to only search through the data in the suspect's web history?
I am also having issues accessing the information stored within Google Chrome's User Data folder. As a start, I am trying to view my own personal information stored there: I am currently attempting to use DB Browser to open the files and see my own web history, but I'm not having much luck with this. Any suggestions?
For those interested in this project of mine, I can update this thread as I go so you can see the progress of my web crawler. The end result will be able to take web history and data from public and private websites and sort important information (i.e. name, address, date of birth) into a database to be used later as a biographic dictionary.
I WILL STRESS THIS AGAIN: THIS IS ALL FOR ACADEMIC PURPOSES IN A CONTROLLED ENVIRONMENT AND USED ON A TEST/FAKE ACCOUNT.
Hindsight (https://github.com/obsidianforensics/hindsight) is an open-source tool written in Python that can parse a ton of information from the files in the /Google/Chrome/User Data/ directory.
You could look at its source for inspiration, or just run the tool and parse its output (it can produce XLSX, JSON, or SQLite) in your crawler.
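If you want to work with the raw data directly, Chrome's history is an ordinary SQLite database in a file named History (no file extension, which may be why DB Browser isn't picking it up) inside the profile folder under User Data; Chrome also locks the live file, so open a copy. A minimal sketch, assuming a Windows image and the standard Default profile; the path and user name are placeholders:

```python
# Sketch: read Chrome history straight from the History SQLite database
# inside the profile folder, then feed those URLs to the crawler.
import shutil
import sqlite3
from datetime import datetime, timedelta

# Placeholder path; on a forensic image, point this at the mounted copy.
src = r"C:\Users\suspect\AppData\Local\Google\Chrome\User Data\Default\History"
shutil.copy2(src, "History_copy")  # never touch the original evidence

def chrome_time(us):
    # Chrome stores timestamps as microseconds since 1601-01-01.
    return datetime(1601, 1, 1) + timedelta(microseconds=us)

con = sqlite3.connect("History_copy")
rows = con.execute(
    "SELECT url, title, visit_count, last_visit_time "
    "FROM urls ORDER BY last_visit_time DESC"
)
for url, title, visits, last in rows:
    print(chrome_time(last), visits, url, title)
    # ...hand `url` to the crawler's fetch queue here...
con.close()
```

The visits table joins back to urls if you need every visit timestamp rather than just the most recent one.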

How do I host JSON files on my website?

One of my iPhone apps uses JSON data to populate its database. However, I'd like to make that process automatic by hosting the JSON file online. How do I do this?
An example of the content I want to host:
[{"Name":"John","Value":22,"Colour":"brown","City":"Auckland"}
Well, you just need hosting: upload your file and point your app to that file, that's it. Search Google for free hosting; there are plenty of options.
You can store them as static files on your HTTP server or generate them from a database with any server-side language.
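To make the second option concrete, here is a minimal sketch of generating that JSON from a database server-side, using Python/Flask as a stand-in for "any server-side language"; the database file, table, and route are made up:

```python
# Sketch: serve the sample records as JSON generated from a database.
from flask import Flask, jsonify
import sqlite3

app = Flask(__name__)

@app.route("/people.json")
def people():
    con = sqlite3.connect("app.db")  # hypothetical database
    rows = con.execute("SELECT name, value, colour, city FROM people")
    data = [{"Name": n, "Value": v, "Colour": c, "City": ci}
            for n, v, c, ci in rows]
    con.close()
    return jsonify(data)  # sent with Content-Type: application/json

if __name__ == "__main__":
    app.run()
```

The iPhone app then fetches http://yourhost/people.json instead of shipping the file inside the bundle; for purely static data, simply uploading the .json file to any web host works the same way.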

ASP.NET and Google Drive DLL

I plan to create a package that contains a Google Drive integration. Our customers can install this package on their IIS (ASP.NET) servers to use it in their web applications. The client ID and secret should always be the same (our data). We also run multiple web sites with this package ourselves. The redirect URL is always different (subdomain/domain/directory of the customers' and our sites).
Which API type do I need?
With a client ID for web applications, I would have to register every URL of our customers' and our own sites.
Is there an example with ASP.NET (without MVC, but probably VB.NET) and the Google Drive API (DLL)?
I've seen a lot of examples, but in most cases the URLs etc. are created manually.
Best regards
Christoph
Make sure that the client ID, client secret, and redirect URI are configurable via a config file, and provide a different file for each deployment.
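For illustration, the idea looks like this (sketched in Python with configparser to keep it short; in ASP.NET the same three values would live in each site's Web.config appSettings). The file name, section, and keys are made up. Note that Google still requires every redirect URI to be registered on the web-application client ID in the API console; the config file only tells each deployment which registered URI is its own:

```python
# Sketch: read per-deployment OAuth settings from a config file.
from configparser import ConfigParser

cfg = ConfigParser()
cfg.read("drive_oauth.ini")  # ship a different file with each deployment

CLIENT_ID = cfg["google_drive"]["client_id"]          # same for every site
CLIENT_SECRET = cfg["google_drive"]["client_secret"]  # same for every site
REDIRECT_URI = cfg["google_drive"]["redirect_uri"]    # differs per site

# drive_oauth.ini for one customer might look like:
# [google_drive]
# client_id = 1234.apps.googleusercontent.com
# client_secret = xxxx
# redirect_uri = https://customer-a.example.com/oauth2callback
```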

Chrome Extension: log in on the options page

I'm trying to make a Chrome extension, and I need an options page where the user can log in. That way the extension knows who the user is and can retrieve user-specific data from a server.
I'm not sure how to do this. Does anyone have a tutorial page or, better yet, some sample code?
Thank you all!
A number of extensions use OAuth or OpenID for authentication. There is a tutorial for OAuth on code.google.com. You could also just use a username/password and make an XHR request to validate them. It really depends on the site and what authentication methods it offers.
You need an authentication engine. In other words, you need a backend for your Chrome extension that handles logging in and logging out. This is extremely easy to do: you can use a backend-as-a-service if it's a small project (look at Parse or Firebase) or write your own backend using a framework like Ruby on Rails.
http://rubyonrails.org/; https://www.parse.com/; https://www.firebase.com/
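As a sketch of what that backend's login endpoint could look like, here is a minimal version in Python/Flask rather than Rails, purely to keep it short; the user store and token scheme are invented for the example:

```python
# Sketch: tiny login endpoint the extension's options page calls via XHR.
import secrets
from flask import Flask, request, jsonify

app = Flask(__name__)
USERS = {"alice": "s3cret"}   # hypothetical user store
TOKENS = {}                   # issued token -> username

@app.route("/login", methods=["POST"])
def login():
    creds = request.get_json()
    if USERS.get(creds.get("username")) != creds.get("password"):
        return jsonify(error="invalid credentials"), 401
    token = secrets.token_hex(16)
    TOKENS[token] = creds["username"]
    # The options page stores this token (e.g. in chrome.storage) and sends
    # it with later requests so the server knows whose data to return.
    return jsonify(token=token)

if __name__ == "__main__":
    app.run()
```

In anything real, the passwords would of course be hashed and the traffic HTTPS-only.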
Options pages are well documented in the official docs, including sample code.