ServiceStack Razor behaviour when path doesn't exist

I have these settings:
CustomHttpHandlers = {
    { HttpStatusCode.NotFound, new RazorHandler("/notfound") },
    { HttpStatusCode.Unauthorized, new RazorHandler("/unauthorized") },
}
When I visit something inside a /stars folder that doesn't exist:
/stars/asdf/xyz
It first checks for /stars/asdf/default.cshtml, then for /stars/default.cshtml, and loads the default page from whichever level has one. So only if the /stars root folder doesn't exist at all does /notfound get loaded.
Is it possible to ask it to load /notfound when /asdf/xyz doesn't exist?
This is the behaviour under the root directory:
http://localhost:2000/asdf will take you to /notfound. However, it doesn't do so under folders.
Thank you.
EDIT ------------------------------------------------------
I noticed that if I go to a bad url /stars/asdf, where /stars doesn't have a default page but a root /default.cshtml exists, then both /notfound and /default are loaded, one after the other?!?
Are my settings wrong? Is SS glitched?

ServiceStack's routing priority is as follows. ServiceStack calls ServiceStackHttpHandlerFactory.GetHandler to get the handler for the current route.
ServiceStackHttpHandlerFactory.GetHandler returns:
1. A matching RawHttpHandler, if any.
2. If the domain root, the handler returned by GetCatchAllHandlerIfAny(...), if any.
3. If the route matches a metadata uri, the relevant handler, if any.
4. The handler returned by ServiceStackHttpHandlerFactory.GetHandlerForPathInfo, if any.
5. The NotFoundHandler.
ServiceStackHttpHandlerFactory.GetHandlerForPathInfo returns:
1. If the url matches a valid REST route, a new RestHandler.
2. If the url matches an existing file or directory:
   - the handler returned by GetCatchAllHandlerIfAny(...), if any;
   - if it's a supported filetype, a StaticFileHandler;
   - if it's not a supported filetype, the ForbiddenHttpHandler.
3. The handler returned by GetCatchAllHandlerIfAny(...), if any.
4. null.
The CatchAllHandlers array contains functions that evaluate the url and either return a handler, or null. The functions in the array are called in sequence and the first one that doesn't return null handles the route.
The code that controls whether the default file is served is part of the StaticFileHandler. It's only called for existing files and directories.
Here's the relevant fragment:
foreach (var defaultDoc in EndpointHost.Config.DefaultDocuments)
{
    var defaultFileName = Path.Combine(fi.FullName, defaultDoc);
    if (!File.Exists(defaultFileName)) continue;
    r.Redirect(request.GetPathUrl() + '/' + defaultDoc);
    return;
}
As you can see, if the default file isn't found in the requested directory, it redirects up the directory chain until it finds a default file to serve. If you need to change this behavior, you can override it by adding a CatchAllHandler of your own. More details about writing a CatchAllHandler can be found in my answer to a related question, here: https://stackoverflow.com/a/17618851/149060


Using require on a .json file will return an empty array if file is being required by another function at the same time

I don't have much experience with node, but I've run into a pitfall when running code that handles I/O (what follows is a simplification of my code).
I have data.json, which contains ['foo','bar'], and many functions that read and parse this file like so:
// foo.js
module.exports = function() {
    // do stuff
    var data = require("path/to/data.json");
    return data;
};

// bar.js
module.exports = function() {
    // do stuff
    var data = require("path/to/data.json");
    return data;
};
However when I call them:
// main.js
var foo = require('foo');
var bar = require('bar');
console.log(foo()); // gives ['foo','bar']
console.log(bar()); // gives []
I suspect that while foo is reading data.json, it "locks" the file, preventing bar from reading it, but I'm not sure why bar still returns an empty array instead of undefined.
Using require to read a json file was a bad idea as I have them littered throughout my entire codebase. Is there an easy fix for something like this? What would be the preferred method of reading json files knowing that at any given moment that file might be accessed by another function?
As we know, require() caches the content of a loaded module (or file, in this case). The next time require() is called with the same path, it returns the cached object instead of reading the file again.
In your case, foo.js and bar.js both require() the same file, so only the first call actually reads and parses it; the second call gets back the very same cached array object. Because it is one shared object, any change foo's "do stuff" makes to it (for example, emptying the array) is exactly what bar then sees, which is why bar() returns [] rather than the original contents or undefined. There is no file locking or EOF involved: require() reads the file once, synchronously, and never touches it again.
SOLUTION:
Stick with the fs module when reading JSON files.

Google Drive Windows App to/from fileId - items with same names, and multiple parents

I'm trying to translate from a Google Drive link on the web (well, the fileId anyway) to the Windows Google Drive app's path on the hard disk, and back again.
It would be helpful if there were something in the API for this (e.g. produce a path excluding the C:\Users\[User]\Google Drive\ prefix from a file/folder ID, and vice versa), but there isn't.
So far I do:
Windows Path to ID: get the first folder of the path and (starting from the root) look for a matching folder, then repeat until finished (possibly with a file name). PROBLEM: Items can be called the same thing, whether files or folders or combinations of both, which is tricky in Windows. The app adds a number ' (1)' and so on, which I have to catch, but how can I know which item ID is the correct one? I believe that numbering is based on date but I'm not sure. So I can potentially end up with multiple results and no way to tell which is which.
ID to Windows Path: take the name of the file/folder from the ID, then keep adding the parent folder(s) until I build up a path. PROBLEM: same as 1 above, if there are multiple matching items then I can't tell which I should use when translating to Windows. PROBLEM: Apparently items in Google Drive can have more than one parent. Not sure how that works in the Windows app.
Can anyone help me fine tune how I do this, or tell me the exact details of how the Google Drive app does it? Code is welcome but not required, and I in turn can provide the code I use if needed.
I'm not sure if I fully understand the question, but I'll take a crack at an answer anyway:
1/ assuming you have a Windows path,
C:\Users\User\Google Drive\myfile.ext
you create a file with a similar path on GooDrive by iterating your path's tokens, recursively creating a tree structure on GooDrive. If the tree nodes (folders/files) exist, return their IDs; otherwise create the objects. The main difference in GooDrive is that a title query may return multiple objects (a list of folders/files). Bad luck; you either use the first one or quit with an error.
global path = "C:\Users\User\Google Drive\myfile.ext"

createTree(String path) {
    rootFolderId = create your root or use GooDrive root
    fileId = iterate(firstToken(path, "\"), rootFolderId)
}

iterate(title, parentFolderId) {
    ID (or multiple IDs) = search for title in parentFolderId
    if (multiple IDs exist)
        BOOM - report error and quit, or use the first one
    if (token not last) {
        if (single ID for title exists) {
            folderId = found ID
        } else {
            folderId = createFolder with title and parentFolderId metadata
        }
        iterate(nextToken(path, "\"), folderId)
    } else {    // the last token represents the file
        if (single ID for title exists) {
            fileId = found ID
        } else {
            fileId = createFile with title and parentFolderId metadata
        }
        return fileId
    }
}
You did not specify the language, but in case it is Java, you can see a similar procedure here in the createTree() method (it is Android code, so there is a lot of Android-specific goo there, sorry).
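Language aside, the iterate idea can be sketched in-memory like this (a plain object stands in for GooDrive, and all ids and names are invented; a real version would replace the lookups with Drive title queries against the parent folder):

```javascript
// In-memory sketch of createTree/iterate: walk the path's tokens, reuse
// existing child nodes, create missing ones, and return the file node's id.
let nextId = 1;
const newNode = (isFile) => ({ id: 'id' + nextId++, isFile, children: {} });

function iterate(node, tokens) {
  const [title, ...rest] = tokens;
  const isLast = rest.length === 0;
  if (!node.children[title]) {
    // Stand-in for createFolder/createFile with title + parent metadata.
    node.children[title] = newNode(isLast);
  }
  return isLast ? node.children[title].id
                : iterate(node.children[title], rest);
}

const root = newNode(false);
const tokens = 'Users\\User\\Google Drive\\myfile.ext'.split('\\');
const fileId = iterate(root, tokens);
console.log(fileId);                           // the new file's id
console.log(iterate(root, tokens) === fileId); // true: existing nodes are reused
```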
2/ assuming you have a Google Drive fileId, you construct the Windows path with this pseudocode (going from the bottom up to the root). Again, you may have multiple parents to deal with (error out, or construct multiple paths with links to a single object):
String path = fileId's title
while (true) {
    parentID = get fileId's parent
    if (multiple parentIDs exist)
        BOOM - report error and quit, or construct multiple paths
        (multiple paths would represent file/folder links)
    if (parentID not valid or parentID's title not valid)
        break
    path = parentID's title + "\" + path
    if (parentID's title is your root)
        break
    fileId = parentID    // move one level up for the next iteration
}
One more thing: You say "Folders and files can be called the same thing..."
In GooDrive, look at the MIME type: there is a specific MIME type, "application/vnd.google-apps.folder", that tells you it is a folder. Also, any parentId metadata represents a folder, since files can't be parents.
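As a small illustration (the item objects below are invented stand-ins for Drive file metadata):

```javascript
// Distinguish folders from files by the Drive folder MIME type.
const FOLDER_MIME = 'application/vnd.google-apps.folder';

function isFolder(item) {
  return item.mimeType === FOLDER_MIME;
}

console.log(isFolder({ title: 'blue folder', mimeType: FOLDER_MIME })); // true
console.log(isFolder({ title: 'myfile.ext', mimeType: 'text/plain' })); // false
```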
Good Luck

DocsList to DriveApp: Cannot find function addFile in object: Line 165

I know I am not alone out there in having some issues with Google's deprecation of DocsList in favor of DriveApp. I have replaced all references to DocsList with DriveApp in my code.
I have a spreadsheet containing variables that used to merge nicely with a Google Docs template, effectively a mailmerge.
Here are the declared variables:
var myDataSheet, myVariablesSheet;
var rowId, maxRows, maxRowsOverride;
var templateDocId;
var timeZone, timestamp, dateline;
var newFolder, newFolderId, collectionDate, collectionName,appendTimestamp;
var newDocNameBase, newDocNameSuffixCol, newDocName, newDoc, newDocId;
var fieldColRow, fieldArr, colArr;
When I run the script, I am returning an error identified at line 165 of my code. The error states:
TypeError: Cannot find function addFile in object Copy of Letter of
Rep Template. (line 165, file "Mail Merge")
Line 165 reads:
DriveApp.getFileById(newDocId).addFile(DriveApp.getFolderByName(newFolderId));
Oddly (at least to me), when I run the script I get a single merged output document, but never more than one.
I suspect that I am dealing with a failing loop of some kind, and an issue with my file naming and my destination folder, but I cannot get past where I am... any insight, help or just a straight fix greatly appreciated.
The getFoldersByName() method returns a FolderIterator. In other words, it returns an "object" that can hold multiple elements. Even though you may only have one folder with a given name, you still need to use a loop to access that one folder.
Google Documentation - Folder Iterator
You are not going to be able to chain all the methods together that you have in that one line of code giving you the error.
I saw a few mistakes with your code.
DriveApp.getFoldersByName(String) looks for folders with the given name, and can return multiple folders. You seem to have passed in the folder's id. A folder's id and its name are different: you can create a folder named "MyFolder" and another folder also called "MyFolder"; their names will be the same, but their ids will be unique.
You're calling File.addFile(Folder), which doesn't exist. That's why you're getting that error. You should switch it around and use Folder.addFile(File) instead. Here's the fixed code:
DriveApp.getFolderById(newFolderId).addFile(DriveApp.getFileById(newDocId));
This question is a bit old but I hope I could help :)

Using the Android API, how to retrieve the entire file path?

Given a com.box.androidlib.Utils.BoxUtils.BoxFolder object, I would like to recurse the object’s parent folders to retrieve the path from the root.
I would hope to do this with something like the code below, where currentBoxFolder is retrieved using Box.getAccountTree(…), as done in the Browse class of the included sample code. However, getParentFolder returns null (for non-root folders, for which I expect it to be non-null).
I figure that it might be possible to populate the parent variable by modifying the source to fetch additional attributes, but I was not able to. Any suggestions?
List<BoxFolder> parentDirs = new ArrayList<BoxFolder>();
parentDirs.add(new BoxFolderEntry(currentBoxFolder));
BoxFolder parent = currentBoxFolder.getParentFolder();
while (parent != null)
{
    parentDirs.add(0, parent);
    parent = parent.getParentFolder();
}
If the end goal is for you to know the path from the root to a folder, there are a couple ways to solve this:
OPTION 1:
Maintain your own map of folder_ids and folder_names as your application fetches them. Presumably, in order to get the id of currentBoxFolder, you would have had to do getAccountTree() calls on all its parents beforehand. If that's the case, you could maintain 2 maps:
Folder ID => Parent Folder ID
Folder ID => Folder Name
From those two maps, you should always be able to get the path from the root.
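As a sketch, that bookkeeping can be two plain maps (all ids and names below are invented):

```javascript
// Reconstruct a path from the root using the two maps OPTION 1 describes:
// folder id -> parent folder id, and folder id -> name.
const parentOf = { a1: 'root', b2: 'a1', c3: 'b2' };
const nameOf = { a1: 'blue folder', b2: 'green folder', c3: 'report.txt' };

function pathFromRoot(id) {
  const parts = [];
  // Walk parent links up to the root, collecting names front-to-back.
  for (let cur = id; cur && cur !== 'root'; cur = parentOf[cur]) {
    parts.unshift(nameOf[cur]);
  }
  return '/' + parts.join('/');
}

console.log(pathFromRoot('c3')); // /blue folder/green folder/report.txt
```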
OPTION 2:
There are 2 params that can be added to the Box.getAccountTree() method that will allow you to know the path:
"show_path_ids"
"show_path_names"
These params haven't been documented yet (we'll do that), but they will cause BoxFolder.getFolderPathIds() and BoxFolder.getFolderPath() to return values such as:
"/5435/4363"
"/blue folder/green folder"

How can I access the information associated to an object from a Mercurial plugin?

I am trying to write a small Mercurial extension which, given the path to an object stored within the repository, will tell you the revision it's at. So far, I'm working from the code in the WritingExtensions article, and I have something like this:
cmdtable = {
    # cmd name   function call
    "whichrev": (whichrev, [], "hg whichrev FILE")
}
and the whichrev function has almost no code:
def whichrev(ui, repo, node, **opts):
    # node will be the file chosen at the command line
    pass
So , for example:
hg whichrev text_file.txt
Will call the whichrev function with node being set to text_file.txt. With the use of the debugger, I found that I can access a filelog object, by using this:
repo.file("text_file.txt")
But I don't know what I should access in order to get to the sha1 of the file. I have a feeling I may not be working with the right function.
Given a path to a tracked file (the file may or may not appear as modified under hg status), how can I get its sha1 from my extension?
A filelog object is pretty low-level; you probably want a filectx:
A filecontext object makes access to data related to a particular filerevision convenient.
You can get one through a changectx:
ctx = repo['.']
fooctx = ctx['foo']
print fooctx.filenode()
Or directly through the repo:
fooctx = repo.filectx('foo', '.')
Pass None instead of '.' to get the working-copy versions.