Read user directory in HTML5 and load images from it

I've been toying around with the FileSystem and File API, in Chrome, to try to implement a transient "instant gallery". The user chooses a directory, and all the images in it are then displayed in the webpage.
But I'm having a hard time: it seems Chrome requires some extra launch arguments to allow file access, the FileSystem and File APIs are not W3C standards and not portable, and I cannot instantiate certain objects...
I cannot even get the directory's absolute path to open files in it (though maybe I don't need the absolute path; I just feel the documentation is lacking).
Anyway, how should I implement this? Is there another API? A simpler way? Do I absolutely need to use FileSystem and File, and set Chrome's launch arguments?

In order to read the files in the directory, you will need to create a DirectoryReader object and use its readEntries() method to read the contents of the directory:
fs.root.getDirectory('Documents', {}, function(dirEntry) {
  var dirReader = dirEntry.createReader();
  dirReader.readEntries(function(entries) {
    for (var i = 0; i < entries.length; i++) {
      var entry = entries[i];
      if (entry.isDirectory) {
        console.log('Directory: ' + entry.fullPath);
      }
      else if (entry.isFile) {
        console.log('File: ' + entry.fullPath);
      }
    }
  }, errorHandler);
}, errorHandler);
Please take a look here: http://code.tutsplus.com/tutorials/toying-with-the-html5-filesystem-api--net-24719
But I think that Chrome will not be able to access an entire directory that the user has selected from their computer; it can only access files the user has selected in a multiple-file input field. If that suits your needs, there is a good tutorial here:
http://www.html5rocks.com/en/tutorials/file/dndfiles/
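If the multiple-file input route suits you, a minimal sketch of the gallery could look like the following (hedged: the element ids picker and gallery are assumptions for illustration, not something taken from the tutorials above):
// Assumes the page contains <input type="file" id="picker" multiple accept="image/*">
// and an empty <div id="gallery"> to hold the images.
document.getElementById('picker').addEventListener('change', function (e) {
  var gallery = document.getElementById('gallery');
  gallery.innerHTML = '';
  Array.prototype.forEach.call(e.target.files, function (file) {
    if (!/^image\//.test(file.type)) { return; } // skip anything that isn't an image
    var img = document.createElement('img');
    img.src = URL.createObjectURL(file); // no FileSystem API needed just to display the images
    img.onload = function () { URL.revokeObjectURL(img.src); }; // release the blob URL once rendered
    gallery.appendChild(img);
  });
});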

Related

What's the lightest way to add the smallest amount of dynamics to a static HTML site?

I have a personal website that's all static html.
It works perfectly for my needs, except for one tiny thing.
I want to dynamically change a single word on a single page: the name of the current map for a game server I'm running.
I can easily run a cron job to dump the name of the map into a file in the site's html directory, call it mapname.txt. This file contains a single line of text, the name of the map.
How would I update, say, game.html to include this map name?
I would very strongly prefer to not pull in some massive framework, or something like php or javascript to accomplish this.
I want the lightest weight solution possible. Using sed is an option, although definitely a hacky one. What's the tiniest step up from static html?
If you say "dynamically", do you mean:
If the information changes ...
A) the user should see it after they have re-loaded the page?
B) the page should update without the need to reload?
For A, you can use PHP (or any other language your server supports) to read the data from the file and print it into the web page. This happens on the server side.
For B, you can use JS that queries the file and updates the HTML. This happens on the client side.
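As a hedged sketch of option B (assuming the page has an element with id mapname and that mapname.txt is served from the same directory), the client-side version could be as small as:
// Fetch the text file and drop its contents into the page.
fetch('mapname.txt', { cache: 'no-store' })
  .then(function (res) { return res.text(); })
  .then(function (name) {
    document.getElementById('mapname').textContent = name.trim();
  });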
There are a few ways to change text, though only two appropriate methods.
First is using textContent:
document.getElementById('example').textContent = 'some example text';
The second is the older nodeValue; however, it's a bit trickier since you have to specify the exact text node (e.g. .firstChild):
document.getElementById('example').firstChild.nodeValue = 'some example text';
You're 100% on the mark about not using frameworks or libraries; almost everything you need already exists natively.
I'm not going to test this, but below is a very stripped-down version of the Ajax function from my web platform. Some people might scream about using the Fetch API instead, but its syntax is a mess. I recommend figuring out how to standardize this function so you can use it for everything instead of copying the code for every instance; mine supports both GET and POST requests.
function ajax(method, url, param_id_container_pos, id_container)
{
  var xhr = new XMLHttpRequest();
  xhr.withCredentials = true;
  xhr.timeout = 8000;
  xhr.open(method, url, true);
  xhr.send(null);
  xhr.onreadystatechange = function()
  {
    if (xhr.readyState == 4)
    {
      var type;
      if (xhr.getResponseHeader('content-type'))
      {
        type = xhr.getResponseHeader('content-type').split('/')[1];
        if (type.indexOf(';') > -1) {type = type.split(';')[0];}
      }
      else {type = 'xml';} //Best guess for now.
      console.log(type, xhr);
      console.log(xhr.responseText);
      //console.log(type, xhr.responseXML);
      //document.getElementById('example').textContent = xhr.responseText;
    }
  };
}
You're also going to have to ensure that the url is set to an absolute path. I use a path variable in my platform (see my profile for the link; wildly clean and organized code).
There are plenty of ways to make this function reusable and I highly recommend doing that. For now, use the last non-curly-bracket line (the commented-out textContent assignment) to update your line of text.
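To apply it to the map-name case, a stripped-down usage sketch could look like the following (the element id mapname and the path /mapname.txt are illustrative assumptions, not part of the original function):
// Same XHR pattern, reduced to the single update the question needs.
var xhr = new XMLHttpRequest();
xhr.open('GET', '/mapname.txt', true);
xhr.onreadystatechange = function () {
  if (xhr.readyState == 4 && xhr.status == 200) {
    document.getElementById('mapname').textContent = xhr.responseText.trim();
  }
};
xhr.send(null);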

Accessing ArcGIS Pro geoprocessing history programmatically

I am writing an ArcGIS Pro Add-In and would like to view items in the geoprocessing history programmatically. The goal of this would be to get the list of parameters and tools used, to be able to better understand and recreate a workflow later, and perhaps, in another project where we would not have direct access to the history within ArcGIS Pro.
After a lot of searching through documentation, online posts, and debugging breakpoints in my code, I've found that some of this data does exist privately within the HistoryProjectItem class, but since it is a private member of a sealed class, it seems there is nothing I can do to access it. The other place I've seen this data is less than ideal: the user has an option to write the geoprocessing history to an XML log file that lives within /AppData/Roaming/ESRI/ArcGISPro/ArcToolbox/History. Our team has been told that this file may be a problem because certain recursive operations may cause it to balloon out of control, and after reading online, it seems that most people want this setting disabled to avoid large log files taking up space on their machine. Overall the log file doesn't seem like a great option, as we fear it could slow down a user by having the program write large log files while they are working.
I was wondering if this data is stored somewhere that I have missed that could be accessed programmatically from the add-in. It seems to me that the data within Project.Items is always stored regardless of user settings, but appears to be inaccessible this way due to class member visibility. I'm not familiar enough with geodatabases and ArcGIS file formats to know whether a project will always have a .gdb from which we could perhaps read the history.
Any insights on how to better read the Geoprocessing history in a minimally intrusive way to the user would be ideal. Is this data available elsewhere?
This was the closest/best solution I have found so far that avoids writing to the history logs, which most people disable due to file-size bloat and warnings that one operation may run other operations recursively, causing the file to balloon massively.
https://community.esri.com/t5/arcgis-pro-sdk-questions/can-you-access-geoprocessing-history-programmatically-using-the/m-p/1007833#M5842
It involves reading the .aprx file (which is written to on save) by unzipping it, parsing the XML, and filtering the contents to only GPHistoryOperations. From there I was able to read all the parameters, environment options, status, and duration of each operation, which is what I was hoping to get.
// Requires the ArcGIS.Core.CIM assembly from the Pro SDK, SharpZipLib
// (ICSharpCode.SharpZipLib.Zip), Newtonsoft.Json, System.Xml, System.Linq and System.Diagnostics.
public static void ListHistory()
{
    // this can be run in a console app (or within a Pro add-in)
    CIMGISProject project = GetProject(@"D:\tests\topologies\topotest1.aprx");
    foreach (CIMProjectItem hist in project.ProjectItems
        .Where(itm => itm.ItemType == "GPHistory"))
    {
        Debug.Print($"+++++++++++++++++++++++++++");
        Debug.Print($"{hist.Name}");
        XmlDocument doc = new XmlDocument();
        doc.LoadXml(hist.PropertiesXML);
        //it sure would be nice if Pro SDK had things like the MdProcess class in ArcObjects
        //https://desktop.arcgis.com/en/arcobjects/latest/net/webframe.htm#MdProcess.htm
        var json = JsonConvert.SerializeXmlNode(doc, Newtonsoft.Json.Formatting.Indented);
        Debug.Print(json);
    }
}
static CIMGISProject GetProject(string aprxPath)
{
    //aprx files are actually zip files
    //https://www.nuget.org/packages/SharpZipLib
    using (var zipFile = new ZipFile(aprxPath))
    {
        var entry = zipFile.GetEntry("GISProject.xml");
        using (var stream = zipFile.GetInputStream(entry))
        {
            using (StreamReader reader = new StreamReader(stream))
            {
                var xml = reader.ReadToEnd();
                //deserialize the xml from the aprx file to hydrate a CIMGISProject
                return ArcGIS.Core.CIM.CIMGISProject.FromXml(xml);
            }
        }
    }
}
Code provided by Kirk Kuykendall

Importing local json file using d3.json does not work

I'm trying to import a local .json file using d3.json().
The file filename.json is stored in the same folder as my HTML file.
Yet the json parameter passed to the callback is null.
d3.json("filename.json", function(json) {
root = json;
root.x0 = h / 2;
root.y0 = 0;});
. . .
}
My code is basically the same as in this d3.js example
If you're running in a browser, you cannot load local files.
But it's fairly easy to run a dev server: on the command line, simply cd into the directory with your files, then:
python -m SimpleHTTPServer
(or python -m http.server using python 3)
Now in your browser, go to localhost:8000 (or whatever port is shown on the command line).
The following used to work in older versions of d3:
var json = {"my": "json"};
d3.json(json, function(json) {
  root = json;
  root.x0 = h / 2;
  root.y0 = 0;
});
In d3 v5, you should do it like this:
d3.json("file.json").then(function(data){ console.log(data)});
Similarly, with csv and other file formats.
You can find more details at https://github.com/d3/d3/blob/master/CHANGES.md
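For completeness, a hedged sketch of the v5+ promise style with error handling (the file name is illustrative); a missing file or malformed JSON surfaces in .catch instead of a silently null argument:
d3.json("filename.json")
  .then(function (data) {
    console.log("loaded", data); // data is the parsed JSON
  })
  .catch(function (error) {
    console.error("could not load filename.json", error);
  });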
Adding to the previous answers: it's simplest to use the HTTP server provided by most Linux/Mac machines (just by having Python installed).
Run the following command in the root of your project
python -m SimpleHTTPServer
Then, instead of accessing file://.....index.html, open your browser on http://localhost:8000 (or whatever port the server reports). This way the browser can fetch all the files in your project without being blocked.
http://bl.ocks.org/eyaler/10586116
Refer to this code; it reads from a file and creates a graph.
I also had the same problem, but later I figured out that the problem was in the json file I was using (an extra comma). If you are getting null here, try printing the error you are getting, like this:
d3.json("filename.json", function(error, graph) {
alert(error)
})
This works in Firefox; in Chrome, somehow it does not print the error.
Loading a local csv or json file with (d3)js is not considered safe, so browsers prevent you from doing it. There are some solutions to get it working though. The following line basically does not work (csv or json) because it is a local import:
d3.csv("path_to_your_csv", function(data) {console.log(data) });
Solution 1:
Disable the security in your browser
Different browsers have different security settings that you can disable. This solution can work and you will be able to load your files. Disabling is however not advisable: it will make you vulnerable to all kinds of threats. On the other hand, who is going to use your software if you tell them to manually disable the security?
Disable the security in Chrome:
--disable-web-security
--allow-file-access-from-files
Solution 2:
Load your csv/json file from a website.
This may seem like a weird solution, but it will work. It is an easy fix but can be impractical. See here for an example; check out the page source. This is the idea:
d3.csv("https://path_to_your_csv", function(data) {console.log(data) });
Solution 3:
Start your own local server, e.g. with Python.
Serving the files over HTTP this way avoids the local-file security checks. This may be a solution when you experiment with your code on your own machine; in many cases, it is not the solution when you have users. This example will serve HTTP on port 8888 unless it is already taken:
python -m http.server 8888
python -m SimpleHTTPServer 8888 &
Open the (Chrome) browser address bar and type the address below. This will open index.html. In case your page has a different name, type the path to that local HTML page.
localhost:8888
Solution 4:
Use local-host and CORS
You can use localhost and CORS, but the approach is not user-friendly because setting it up may not be so straightforward.
Solution 5:
Embed your data in the HTML file
I like this solution the most. Instead of loading your csv, you can write a script that embeds your data directly in the HTML. This allows users to use their favorite browser, and there are no security issues. This solution may not be so elegant, because your HTML file can grow very large depending on your data, but it will work. See here for an example; check out the page source.
Remove this line:
d3.csv("path_to_your_csv", function(data) { })
Replace with this:
var data = [
    $DATA_COMES_HERE$
];
You can't readily read local files, at least not in Chrome, and possibly not in other browsers either.
The simplest workaround is to include your JSON data in your script file, then get rid of your d3.json call and keep the code from the callback you passed to it.
Your code would then look like this:
json = { ... };
root = json;
root.x0 = h / 2;
root.y0 = 0;
...
I have used this
d3.json("graph.json", function(error, xyz) {
if (error) throw error;
// the rest of my d3 graph code here
}
So you can refer to your JSON data by using the variable xyz; graph.json is the name of my local JSON file.
Use the resource as a local variable
var filename = {x0: 0, y0: 0};
// you can give this shim function a different name instead of overriding d3.json
d3.json = (x, cb) => cb.call(null, x);
d3.json(filename, function(json) {
  root = json;
  root.x0 = h / 2;
  root.y0 = 0;
});
//...
}

Detecting folders/directories in javascript FileList objects

I have recently contributed some code to Moodle which uses some of the capabilities of HTML5 to allow files to be uploaded in forms via drag and drop from the desktop (the core part of the code is here: https://github.com/moodle/moodle/blob/master/lib/form/dndupload.js for reference).
This is working well, except for when a user drags a folder / directory instead of a real file. Garbage is then uploaded to the server, but with the filename matching the folder.
What I am looking for is an easy and reliable way to detect the presence of a folder in the FileList object, so I can skip it (and probably return a friendly error message as well).
I've looked through the documentation on MDN, as well as a more general web search, but not turned up anything. I've also looked through the data in the Chrome developer tools and it appears that the 'type' of the File object is consistently set to "" for folders. However, I'm not quite convinced this is the most reliable, cross-browser detection method.
Does anyone have any better suggestions?
You cannot rely on file.type. A file without an extension will have a type of "". Save a text file with a .jpg extension and load it into a file control, and its type will display as image/jpeg. And, a folder named "someFolder.jpg" will also have its type as image/jpeg.
Instead, try to read the first byte of the file. If you are able to read the first byte, you have a file. If an error is thrown, you probably have a directory:
// note: slice().arrayBuffer() returns a promise, so this needs to run inside an async function
try {
  await file.slice(0, 1).arrayBuffer();
  // it's a file!
}
catch (err) {
  // it's a directory!
}
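A hedged wrapper around that idea (the helper name isRealFile and the surrounding async context are assumptions for illustration):
// Resolves to true for readable files, false for directories or otherwise unreadable entries.
async function isRealFile(file) {
  try {
    await file.slice(0, 1).arrayBuffer();
    return true; // it's a file!
  } catch (err) {
    return false; // it's (probably) a directory
  }
}
// Usage sketch over a FileList from an <input> or a drop event:
// for (const f of fileList) { if (await isRealFile(f)) { /* upload f */ } }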
If you are in the unfortunate position of supporting IE11, the File object will not have the arrayBuffer method. You have to resort to the FileReader object:
// use this code if you support IE11
var reader = new FileReader();
reader.onload = function (e) {
  // it's a file!
};
reader.onerror = function (e) {
  // it's a directory!
};
reader.readAsArrayBuffer(file.slice(0, 1));
I also ran into this problem and below is my solution. Basically, I took a two-pronged approach:
(1) check whether the File object's size is large, and consider it to be a genuine file if it is over 1MB (I'm assuming folders themselves are never that large).
(2) If the File object is smaller than 1MB, then I read it using FileReader's 'readAsArrayBuffer' method. Successful reads call 'onload' and I believe this indicates the file object is a genuine file. Failed reads call 'onerror' and I consider it a directory. Here is the code:
var isLikelyFile = null;
if (f.size > 1048576) {
  isLikelyFile = false;
}
else {
  var reader = new FileReader();
  reader.onload = function (result) { isLikelyFile = true; };
  reader.onerror = function () { isLikelyFile = false; };
  reader.readAsArrayBuffer(f);
}
//wait for reader to finish : should be quick as file size is < 1MB ;-)
var interval = setInterval(function() {
  if (isLikelyFile != null) {
    clearInterval(interval);
    console.log('finished checking File object. isLikelyFile = ' + isLikelyFile);
  }
}, 100);
I tested this in FF 26, Chrome 31, and Safari 6, and all three browsers call 'onerror' when attempting to read directories. Let me know if anyone can think of a use case where this fails.
I propose calling FileReader.readAsBinaryString on the File object. In Firefox, this will raise an exception when the File is a directory. I only do this if the File meets the conditions proposed by gilly3.
Please see my blog post at http://hs2n.wordpress.com/2012/08/13/detecting-folders-in-html-drop-area/ for more details.
Also, version 21 of Google Chrome now supports dropping folders. You can easily check if the dropped items are folders, and also read their contents.
Unfortunately, I don't have any (client-side) solution for older Chrome versions.
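A minimal sketch of that idea, combining a simple size pre-check with a try/catch around readAsBinaryString (behaviour is browser-dependent, so treat this as illustrative only):
// In the Firefox versions discussed above, readAsBinaryString throws when handed
// a directory, so the catch branch flags the entry as a likely folder.
function looksLikeRealFile(file) {
  if (file.size > 1048576) { return true; } // large entries are assumed to be real files
  try {
    new FileReader().readAsBinaryString(file); // may throw synchronously for directories
    return true;
  } catch (e) {
    return false;
  }
}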
One other note is that type is "" for any file that has an unknown extension. Try uploading a file named test.blah and the type will be empty. AND... try dragging and dropping a folder named test.jpg - type will be set to "image/jpeg". To be 100% correct, you can't depend on type solely (or if at all, really).
In my testing, folders have always been of size 0 (on FF and Chrome on 64-bit Windows 7, and under Linux Mint, which is essentially Ubuntu). So my folder check is just checking whether the size is 0, and it seems to work for me in our environment. We also don't want 0-byte files uploaded, so if it's 0 bytes the message comes back as "Skipped - 0 bytes (or folder)".
FYI, this post will tell you how to use dataTransfer API in Chrome to detect file type: http://updates.html5rocks.com/2012/07/Drag-and-drop-a-folder-onto-Chrome-now-available
The best option is to use both the 'progress' and 'load' events on a FileReader instance.
var fr = new FileReader();
var type = '';
// Early terminate reading files.
fr.addEventListener('progress', function(e) {
  console.log('progress - valid file');
  fr.abort();
  type = 'file';
});
// The whole file loads before a progress event happens.
fr.addEventListener('load', function(e) {
  console.log('load - valid file');
  type = 'file';
});
// Not a file. Possibly a directory.
fr.addEventListener('error', function(e) {
  console.log('error - not a file or is not readable by the web browser');
});
fr.readAsArrayBuffer(thefile);
This fires the error handler when presented with a directory and most files will fire the progress handler after reading just a few KB. I've seen both events fire. Triggering abort() in the progress handler stops the FileReader from reading more data off disk into RAM. That allows for really large files to be dropped without reading all of the data of such files into RAM just to determine that they are files.
It may be tempting to say that if an error happens that the File is a directory. However, a number of scenarios exist where the File is unreadable by the web browser. It is safest to just report the error to the user and ignore the item.
An easy method is the following:
Check if the file's type is an empty string: type === ""
Check if the file's size is 0, 4096, or a multiple of it: size % 4096 === 0.
if (file.type === "" && file.size % 4096 === 0) {
// The file is a folder
} else {
// The file is not a folder
}
Note: just by chance, there could be files without a file extension whose size is a multiple of 4096. Even though this will not happen very often, be aware of it.
For reference, please see the great answer from user Marco Bonelli to a similar topic. This is just a short summary of it.

How to put/save files into your application directory? (adobe air)

How to put/save files into your application directory? (adobe air) (code example, please)
It's not recommended, but it is possible. Construct your File reference like this:
var pathToFile:String = File.applicationDirectory.resolvePath('file.txt').nativePath;
var someFile:File = new File(pathToFile);
You can't write to your AIR app's Application Directory, it's not allowed. You can however write to a folder that your AIR app creates in the user's directory, called the Application Storage Directory. If you need config files and the like, that's probably the best place to put them. See 'applicationDirectory' in the docs link below:
http://www.adobe.com/livedocs/flash/9.0/ActionScriptLangRefV3/
#glendon
If you try to save directly to applicationDirectory it will indeed throw an error, but it seems you can move the file within the filesystem. I used the code below after yours:
var sourceFile:File = File.applicationStorageDirectory.resolvePath ("file.txt");
var pathToFile:String = File.applicationDirectory.resolvePath ('file.txt').nativePath;
var destination:File = new File (pathToFile);
sourceFile.moveTo (destination, true);
The reason why you 'shouldn't' use the application folder is that not all users have rights to save files in that folder, while everyone does in applicationStorageDirectory.
The accepted answer works!
But if I do this instead:
var vFile = File.applicationDirectory.resolvePath('file.txt');
var vStream = new FileStream();
vStream.open(vFile, FileMode.WRITE);
vStream.writeUTFBytes("Hello World");
vStream.close();
It will give SecurityError: fileWriteResource. However, if I use applicationStorageDirectory instead, the above code will work. It'll only NOT work if it's applicationDirectory. Moreover, Adobe's documentation also says that an AIR app cannot write to its applicationDirectory.
Now, I wonder if it's a bug on Adobe's part that they allow writing to the applicationDirectory using the way suggested by the accepted answer.
Try this:
var objFile:File = new File("file:///" + File.applicationDirectory.resolvePath(strFilePath).nativePath);
The output would be like this:
file:///c:\del\userConf.xml
This will work fine.
So you want to write a file into the applicationDirectory?
Don't forget that you can use a NativeProcess to run PowerShell and add a registry key for your AIR application (example: C:\Program Files (x86)\AirApp\AirApp.exe with a RunAsAdmin entry):
the NativeProcess writes the new registry entry,
AirApp restarts with RunAsAdmin,
and AirApp can then write the file :)
Don't worry!
I know this trick: the application first writes a registry file and then calls PowerShell via NativeProcess to merge it into the registry (regedit /s \AirApp.reg).
Once AirApp runs in administrative mode, writing works fine, so your solution should be able to write the file - just make sure the writing is done by AirApp itself.
(My suggestion on the Adobe forums was along these lines, as a way around the access problem when writing a file stream.)
This function gives your current AIR application's folder, which bypasses the security problem:
function SWFName():String {
  var swfName:String;
  var mySWF:String = new File(this.loaderInfo.url).nativePath;
  swfName = this.loaderInfo.loaderURL;
  swfName = swfName.slice(swfName.lastIndexOf("/") + 1); // Extract the filename from the url
  swfName = new URLVariables("path=" + swfName).path; // this is a hack to decode URL-encoded values
  mySWF = mySWF.replace(swfName, "");
  return mySWF;
}