Fatfree routing with PHP built-in web server - fat-free-framework

I'm learning Fat-Free's routing and found that it behaves unexpectedly.
Here is my code in index.php:
$f3 = require_once(dirname(dirname(__FILE__)). '/lib/base.php');
$f3 = \Base::instance();
echo 'received uri: '.$_SERVER['REQUEST_URI'].'<br>';
$f3->route('GET /brew/#count',
    function($f3, $params) {
        echo $params['count'].' bottles of beer on the wall.';
    }
);
$f3->run();
and here is the URL which I access: http://xx.xx.xx.xx:8090/brew/12
I get a 404 error:
received uri: /brew/12
Not Found
HTTP 404 (GET /12)
The strange thing is that the URI seen by F3 is now "/12" instead of "/brew/12", and I guess this is the issue.
When I check base.php (3.6.5), $this->hive['BASE'] = "/brew" and $this->hive['PATH'] = "/12".
But if F3 only uses $this->hive['PATH'] to match against the predefined routes, it won't be able to match them.
If I change the route to:
$f3->route('GET /brew',
and use the URL: http://xx.xx.xx.xx:8090/brew, then the route matches without issue.
In this case, $this->hive['BASE'] = "" and $this->hive['PATH'] = "/brew". If F3 compares $this->hive['PATH'] with the predefined route, they match.
BTW, I'm using PHP's built-in web server, and since $_SERVER['REQUEST_URI'] (which is used by base.php) returns the correct URI, I don't think there is anything wrong with the URL rewriting in my .htrouter.php.
Any idea? What did I miss here?
Update: here is the content of my .htrouter.php:
<?php
# get the relative URL
$uri = urldecode(parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH));
# if the request targets a real file (such as html, image, js, css), serve it as-is
if ($uri !== '/' && file_exists(__DIR__ . $uri)) {
    return false;
}
# if the request is a virtual URL, pass it to the bootstrap file - index.php
$_GET['_url'] = $_SERVER['REQUEST_URI'];
require_once __DIR__ . '/public/index.php';

Your issue is directly related to the way you're using the PHP built-in web server.
As stated in the PHP docs, here's how the server handles requests:
URI requests are served from the current working directory where PHP was started, unless the -t option is used to specify an explicit document root. If a URI request does not specify a file, then either index.php or index.html in the given directory are returned. If neither file exists, the lookup for index.php and index.html will be continued in the parent directory and so on until one is found or the document root has been reached. If an index.php or index.html is found, it is returned and $_SERVER['PATH_INFO'] is set to the trailing part of the URI. Otherwise a 404 response code is returned.
If a PHP file is given on the command line when the web server is started it is treated as a "router" script. The script is run at the start of each HTTP request. If this script returns FALSE, then the requested resource is returned as-is. Otherwise the script's output is returned to the browser.
That means that, by default (without a router script), the web server already does a pretty good job of routing nonexistent URIs to your document root's index.php file.
In other words, provided your file structure is like:
lib/
    base.php
    template.php
    etc.
public/
    index.php
The following command is enough to start your server and dispatch the requests properly to the framework:
php -S 0.0.0.0:8090 -t public/
Or if you're running the command directly from the public/ folder:
cd public
php -S 0.0.0.0:8090
Beware that the working directory of your application depends on the folder from which you run the command. To make this predictable, I strongly advise you to add chdir(__DIR__); at the top of your public/index.php file. This way, all subsequent require calls will be relative to your public/ folder. For example: $f3 = require('../lib/base.php');
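Putting that together, the top of public/index.php could look like this (a minimal sketch based on the file structure above; the route is the one from the question):

<?php
// public/index.php
// make relative paths resolve from public/, regardless of where the server was started
chdir(__DIR__);
$f3 = require('../lib/base.php');
$f3->route('GET /brew/#count',
    function($f3, $params) {
        echo $params['count'].' bottles of beer on the wall.';
    }
);
$f3->run();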
Routing file-style URIs
By default, the built-in server won't pass file-style URIs on to your index.php, even when the file doesn't exist, as stated in:
If a URI request does not specify a file, then either index.php or index.html in the given directory are returned
So if you plan to define some routes with dots, such as:
$f3->route('GET /brew.json','Brew->json');
$f3->route('GET /brew.html','Brew->html');
Then it won't work, because PHP won't pass those requests on to index.php.
In that case, you need to call a custom router, such as the .htrouter.php you were trying to use. The only thing is that your .htrouter.php has obviously been designed for a different framework (F3 doesn't care about $_GET['_url'], but it does care about $_SERVER['SCRIPT_NAME']).
Here's an example of an .htrouter.php that should work with F3:
// public directory definition
$public_dir = __DIR__.'/public';
// serve existing files as-is
if (file_exists($public_dir.$_SERVER['REQUEST_URI']))
    return FALSE;
// patch SCRIPT_NAME and pass the request to index.php
$_SERVER['SCRIPT_NAME'] = 'index.php';
require($public_dir.'/index.php');
NB: the $public_dir variable should be set according to the location of the .htrouter.php file.
For example if you call:
php -S 0.0.0.0:8090 -t public/ .htrouter.php
it should be $public_dir=__DIR__.'/public'.
But if you call:
cd public
php -S 0.0.0.0:8090 .htrouter.php
it should be $public_dir=__DIR__.

OK, I checked base.php and found out that when F3 calculates the base URI, it uses $_SERVER['SCRIPT_NAME']:
$base='';
if (!$cli)
    $base=rtrim($this->fixslashes(
        dirname($_SERVER['SCRIPT_NAME'])),'/');
If we have the web server forward all requests directly to index.php, then
$_SERVER['SCRIPT_NAME'] = '/index.php', and in this case base is ''.
If we use URL rewriting via .htrouter.php to index.php, then
$_SERVER['SCRIPT_NAME'] = '/brew/12', and in this case base is '/brew', which causes the issue.
Since I'm going to use URL rewriting, I have to comment out the if statement and make sure base stays ''.
Thanks xfra35 for providing the clue.
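For reference, the same effect can be achieved without editing base.php, by normalizing SCRIPT_NAME in the router before handing the request off (a sketch along the lines of the answer above; the path matches the earlier examples):

// .htrouter.php: force F3 to compute BASE as '' for every rewritten request
$_SERVER['SCRIPT_NAME'] = '/index.php';
require(__DIR__.'/public/index.php');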

Here is an Apache-like PHP router; it can do URL rewriting:
https://github.com/kyesil/QPHP/blob/master/router.php
Usage:
php -S localhost:8081 router.php

Related

Change swagger host and port serving it statically

Inspired by this sample repository, I'm generating swagger output in JSON with protoc and serving it. However, for certain reasons I'm hosting the swagger content on a different port (:10000) than my REST API service (:8000).
I'm using the Go library statik to bundle up the swagger assets and serve them. It works, and a webpage is served when going to localhost:10000.
However, every cURL request swagger makes seems to be confined to just that - localhost:10000. The REST API lives on localhost:8081.
Serving swagger-ui with static content, how do I change the host/port for the REST api server?
I've tried going into the index.html of the swagger-ui content to add basePath, as here, but with no luck. Every request is still made to :10000:
window.onload = function() {
  // Begin Swagger UI call region
  const ui = SwaggerUIBundle({
    url: "./service.swagger.json",
    dom_id: '#swagger-ui',
    deepLinking: true,
    presets: [
      SwaggerUIBundle.presets.apis,
      SwaggerUIStandalonePreset
    ],
    plugins: [
      SwaggerUIBundle.plugins.DownloadUrl
    ],
    layout: "StandaloneLayout",
    // I added this, but it did not change anything.
    basePath: "localhost:8081"
  })
  // End Swagger UI call region
  window.ui = ui
}
Add host with the value localhost:8081.
Remove basePath; basePath is used to change the relative path after the host, i.e. if your web server is hosted under /v1/, you can use basePath to change that.
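In a Swagger 2.0 spec, these are top-level fields of service.swagger.json; a minimal illustrative sketch (the host value is the one from this question, the rest are placeholders):

{
  "swagger": "2.0",
  "info": { "title": "service", "version": "1.0" },
  "host": "localhost:8081",
  "basePath": "/",
  "paths": {}
}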
I am still trying to find out how to pass the host value dynamically for production, staging and dev environments.

Serve index file instead of download prompt

I have my website hosted on S3 with CloudFront as a CDN, and I need these two URLs to behave the same and to serve the index.html file within the directory:
example.com/directory
example.com/directory/
The one with the / at the end incorrectly prompts the browser to download a zero byte file with a random hash for the name of the file. Without the slash it returns my 404 page.
How can I get both paths to deliver the index.html file within the directory?
If there's a way I'm "supposed" to do this, great! That's what I'm hoping for, but if not I'll probably try to use Lambda@Edge to do a redirect. I need that for some other situations anyway, so some instructions on how to do a 301 or 302 redirect from Lambda@Edge would be helpful too :)
Update (as per John Hanley's Comment)
curl -i https://www.example.com/directory/
HTTP/2 200
content-type: application/x-directory
content-length: 0
date: Sat, 12 Jan 2019 22:07:47 GMT
last-modified: Wed, 31 Jan 2018 00:44:16 GMT
etag: "[id]"
accept-ranges: bytes
server: AmazonS3
x-cache: Miss from cloudfront
via: 1.1 [id].cloudfront.net (CloudFront)
x-amz-cf-id: [id]
Update
CloudFront has one behavior set, forwarding http to https and sending the requests to S3. It also has a 404 error route under the errors tab.
S3 only offers automatic index documents when you've enabled and are using the web site hosting features of the bucket, by pointing to the bucket's website hosting endpoint, ${bucket}.s3-website.${region}.amazonaws.com rather than the generic REST endpoint of the bucket, ${bucket}.s3.amazonaws.com.
Web site endpoints and REST endpoints have numerous differences, including this one.
The reason you're seeing these 0-byte files for object keys ending in / is because you are creating folder objects in the bucket using the S3 console or another utility that actually creates the 0-byte objects. They aren't needed, once the folders have objects "in" them -- but they're the only way to display an empty folder in the S3 console, which displays an object named foo/ as a folder named foo, even if there are no other objects with a key prefix of foo/. It's part of the visual emulation of a folder hierarchy in the console, even though objects in S3 are never really "in" folders.
If for some reason you need to use the REST endpoint -- such as not wanting to make the bucket public -- then you need two Lambda@Edge triggers in CloudFront to emulate this functionality fairly closely.
An Origin Request trigger can inspect and modify requests after the CloudFront cache is checked, before the request is sent to the origin. We use this to check for a path ending in / and append index.html if we find that.
An Origin Response trigger can inspect and potentially modify responses before they are written into the CloudFront cache. It can also inspect the original request that preceded the request that generated the response. We use this to check whether the response is an error, and if so, whether the original request looks like neither an index-document request nor a file request (a "file" here meaning that, after the final slash in the path, there is at least one character, followed by a dot, followed by at least one more character). If it is neither of those things, we redirect to the original path with a final / appended.
Origin Request and Origin Response triggers fire only on cache misses. When there is a cache hit, neither trigger fires, because they are on the origin side of CloudFront -- the back side of the cache. Requests that can be served from the cache are served from the cache, so the triggers are not invoked.
The following is a Lambda@Edge function written in Node.js 8.10. This one Lambda function adjusts its behavior so that it acts as either the origin request or origin response handler, depending on context. After publishing a version in Lambda, associate that version's ARN with the CloudFront cache behavior settings as both an Origin Request and an Origin Response trigger.
'use strict';

// combination origin-request / origin-response trigger to emulate the S3
// website hosting index document functionality, while using the REST
// endpoint for the bucket
// https://stackoverflow.com/a/54263794/1695906

const INDEX_DOCUMENT = 'index.html'; // do not prepend a slash to this value
const HTTP_REDIRECT_CODE = '302';    // or use 301 or another code if desired
const HTTP_REDIRECT_MESSAGE = 'Found';

exports.handler = (event, context, callback) => {
    const cf = event.Records[0].cf;

    if (cf.config.eventType === 'origin-request') {
        // if the path ends with '/' then append INDEX_DOCUMENT before sending to S3
        if (cf.request.uri.endsWith('/')) {
            cf.request.uri = cf.request.uri + INDEX_DOCUMENT;
        }
        // return control to CloudFront, to send the request to S3, whether or not
        // we modified it; if we did, the modified URI will be requested.
        return callback(null, cf.request);
    }
    else if (cf.config.eventType === 'origin-response') {
        // is the response 403 or 404? If not, we will return it unchanged.
        if (cf.response.status.match(/^40[34]$/)) {
            // it's an error.
            // we're handling a response, but Lambda@Edge can still see the attributes
            // of the request that generated this response; so, we check whether this is
            // a page that should be redirected with a trailing slash appended. If it
            // doesn't already look like an index document request, doesn't end in a
            // slash, and doesn't look like a filename with an extension, we'll try that.
            // This is essentially what the S3 web site endpoint does if you hit a
            // nonexistent key, so that the browser requests the index with the correct
            // relative path -- except that S3 checks whether it will actually work.
            // We are using heuristics, rather than checking the bucket, but checking
            // is an alternative.
            if (!cf.request.uri.endsWith('/' + INDEX_DOCUMENT) && // not a failed request for an index document
                !cf.request.uri.endsWith('/') && // unlikely, unless this code is modified to pass other things through on the request side
                !cf.request.uri.match(/[^\/]+\.[^\/]+$/)) // doesn't look like a filename with an extension
            {
                // add the original error to the response headers, for reference/troubleshooting
                cf.response.headers['x-redirect-reason'] = [{ key: 'X-Redirect-Reason', value: cf.response.status + ' ' + cf.response.statusDescription }];
                // set the redirect code
                cf.response.status = HTTP_REDIRECT_CODE;
                cf.response.statusDescription = HTTP_REDIRECT_MESSAGE;
                // set the Location header with the modified URI;
                // just append the '/', not the "index.html" -- the next request will
                // trigger this function again, and it will be added without appearing
                // in the browser's address bar.
                cf.response.headers['location'] = [{ key: 'Location', value: cf.request.uri + '/' }];
                // not strictly necessary, since browsers don't display it, but remove
                // the response body with the S3 error XML in it
                cf.response.body = '';
            }
        }
        // return control to CloudFront, with either the original response, or
        // the modified response, if we modified it.
        return callback(null, cf.response);
    }
    else {
        // this is not intended as a viewer-side trigger; throw an exception, visible
        // only in the Lambda CloudWatch logs, with a 502 returned to the browser.
        return callback(`Lambda function is incorrectly configured; triggered on '${cf.config.eventType}' but expected 'origin-request' or 'origin-response'`);
    }
};
The answers given are wrong. CloudFront has its own configuration to have www.yourdomain.com/ serve up a document. It's called the "default root object", and its config is found under the "General" tab of your CloudFront distribution. Here are the full steps for getting an SSL/HTTPS-enabled custom domain + CloudFront + S3 bucket:
Create a brand new S3 bucket with default (closed-off) permissions or remove all public access from the target bucket.
Disable static website hosting. You don't need it.
If you haven't already, get your SSL cert into Amazon so you can attach it to the cloudfront distribution which will be pointing to your S3 bucket.
Create a cloudfront distribution pointing to the target S3 bucket, utilizing the cert.
For the origin configuration, use the www.yourdomain.com.s3.amazonaws.com form for the origin, NOT the static website hosting URL (which should be disabled anyway).
Let the CloudFront config automatically change the S3 bucket access ("restrict bucket access"). You want access to the bucket restricted to this CloudFront distribution ONLY, via a specific origin access identity (see the policy sketch after these steps). No one should be hitting your S3 bucket directly, especially since it can serve via http (no "s").
Under the cloudfront "general" tab (or during setup) set your default root object to "index.html" or whatever. Otherwise, requests to https://www.yourdomain.com/ will show permission denied.
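For reference, the bucket policy that CloudFront generates when restricting access to an origin access identity looks roughly like this (the identity ID and bucket name are placeholders):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity EXAMPLEID"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::www.yourdomain.com/*"
    }
  ]
}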
AWS has recently launched CloudFront Functions, which can be used for this use case.
CloudFront Functions are cheaper, faster and easier to implement and test than Lambda@Edge.
Below is a sample function that appends index.html to the request if it is not provided when accessing the path.
function handler(event) {
    var request = event.request;
    var uri = request.uri;

    // Check whether the URI is missing a file name.
    if (uri.endsWith('/')) {
        request.uri += 'index.html';
    }
    // Check whether the URI is missing a file extension.
    else if (!uri.includes('.')) {
        request.uri += '/index.html';
    }

    return request;
}
This will not append index.html in the web browser address bar, which gives a cleaner URL while browsing. In your case https://www.example.com/directory/ will remain as such while browsing, but will render the content of https://www.example.com/directory/index.html.
More samples can be found at https://github.com/aws-samples/amazon-cloudfront-functions/blob/main/url-rewrite-single-page-apps/index.js
This type of behavior is usually controlled/caused by your HTTP(s) header data, specifically, the Content-Type that your client receives.
Inspect the header and try tweaking what gets returned from your server. That should lead to your solution.
In Chrome, visit a URL, right click, select Inspect to open the developer tools.
Select Network tab.
Reload the page, select any HTTP request on the left panel, and the HTTP headers will be displayed on the right panel.
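Alternatively, from the command line, in the same style as the curl output in the question (the host is the question's placeholder), you can print just the content type:

curl -sI https://www.example.com/directory/ | grep -i '^content-type'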

Understanding JSON-RPC in Perl

I am trying to understand the concept of JSON-RPC and its Perl implementation. Though I can find a lot of examples for Python/Java, I find surprisingly few for Perl.
I am following this example but am not sure it is complete. The example I had in mind is adding 2 integers. I have a very basic HTML page set up, like so:
<html>
  <body>
    <input type="text" name="num1"><br>
    <input type="text" name="num2"><br>
    <button>Add</button>
  </body>
</html>
Next, based on the example above, I have 3 files:
test1.pl
# Daemon version
use JSON::RPC::Server::Daemon;

# see documentation at:
# https://metacpan.org/pod/distribution/JSON-RPC/lib/JSON/RPC/Legacy.pm
my $server = JSON::RPC::Server::Daemon->new(LocalPort => 8080);
$server->dispatch({'/test' => 'myApp'});
$server->handle();
test2.pl
#!/usr/bin/perl
use JSON::RPC::Client;

my $client = new JSON::RPC::Client;
my $uri = 'http://localhost:8080/test';
my $obj = {
    method => 'sum', # or 'myApp.sum'
    params => [10, 20],
};

my $res = $client->call($uri, $obj);

if ($res) {
    if ($res->is_error) {
        print "Error : ", $res->error_message;
    } else {
        print $res->result;
    }
} else {
    print $client->status_line;
}
myApp.pl
package myApp;

# the base class provides the :Public and :Private attributes used below
use base qw(JSON::RPC::Procedure);

sub sum : Public(a:num, b:num) {
    my ($s, $obj) = @_;
    return $obj->{a} + $obj->{b};
}

1;
While I understand what these files individually do, I am at a complete loss when it comes to combining them and making them work together.
My questions are as follows:
Does the button in the HTML page come inside a <form> tag (like we would normally do in a CGI-based program)? If yes, what file does that call? If no, then how do I pass the values to be added?
What is the order of execution of the 3 Perl files? Which one calls which one? How is the flow of execution?
When I tried to run the Perl files from the CLI, i.e. using ./test2.pl, I got the following error: Error 301 Moved Permanently. What moved permanently? Which file was it trying to access? I tried running the files from within /var/www/html and /var/www/html/test.
Some help in understanding the nuances of this would really be appreciated. Thanks in advance
Does the button in the HTML page come inside a <form> tag (like we would normally do in a CGI-based program)? If yes, what file does that call? If no, then how do I pass the values to be added?
HTML has nothing at all to do with JSON-RPC. While the RPC call is done via an HTTP POST request, if you're doing that from the browser, you'll need to use XMLHttpRequest (i.e: AJAX). Unlike an HTML form post, the Content-Type: header will need to be something specific to JSON-RPC (e.g: application/json or similar), and you'll need to encode your form data via JSON.stringify and correctly construct the JSON-RPC "envelope", including the id, jsonrpc, method and params properties.
Rather than doing this by hand, you might use a purpose-built JSON-RPC JavaScript client like the jQuery-JSONRPC plugin (there are many others), although the protocol is so simple that implementations are usually less than 20 lines of code.
From the jQuery-RPC documentation, you'd set up the connection like this:
$.jsonRPC.setup({
  endPoint: '/ENDPOINT-ROUTE-GOES-HERE'
});
and you'd call the server-side method like this:
$.jsonRPC.request('sum', {
  params: [YOURNUMBERINPUTELEMENT1.value, YOURNUMBERINPUT2.value],
  success: function(result) {
    /* Do something with the result here */
  },
  error: function(result) {
    /* Result is an RPC 2.0 compatible response object */
  }
});
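If you'd rather not pull in a plugin, here's a minimal hand-rolled sketch using fetch, assuming the daemon from test1.pl is listening on localhost:8080 with the /test route (the element lookups match the HTML above):

document.querySelector('button').addEventListener('click', function () {
  var a = Number(document.querySelector('input[name="num1"]').value);
  var b = Number(document.querySelector('input[name="num2"]').value);
  fetch('http://localhost:8080/test', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    // JSON-RPC envelope: id, method and params
    body: JSON.stringify({ id: 1, method: 'sum', params: [a, b] })
  })
    .then(function (res) { return res.json(); })
    .then(function (data) { alert(data.result); })
    .catch(function (err) { console.error(err); });
});

Note that the browser will only allow this if the page is served from the same origin as the daemon, or if appropriate CORS headers are added.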
What is the order of execution of the 3 Perl files? Which one calls which one? How is the flow of execution?
You'll likely only need test2.pl for testing. It's an example implementation of a JSON-RPC client. You likely want your client to run in your web-browser (as described above). The client JavaScript will make an HTTP POST request to wherever test1.pl is serving content. (e.g: http://localhost:8080).
Or, if you want to keep your code as HTML<-->CGI, then you'll need to make JSON-RPC client calls from within your Perl CGI server-side code (which seems silly if it's on the same machine).
When test1.pl calls dispatch, the myApp module will be loaded.
Then, when test1.pl calls handle, the sum function in the myApp package will be called.
The JSON::RPC::Server module takes care of marshalling from JSON-RPC to Perl data structures and back again around the call to handle. die()ing in sum should result in a JSON-RPC exception being transmitted to the calling client, rather than the death of the test1.pl script.
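You can also sanity-check the daemon from the shell before wiring up a browser client, e.g. (assuming test1.pl is running on port 8080, with the same envelope test2.pl sends):

curl -X POST http://localhost:8080/test \
     -H 'Content-Type: application/json' \
     -d '{"id":1,"method":"sum","params":[10,20]}'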
When I tried to run the Perl files from the CLI, i.e. using ./test2.pl, I got the following error: Error 301 Moved Permanently. What moved permanently? Which file was it trying to access? I tried running the files from within /var/www/html and /var/www/html/test.
This largely depends on the configuration of your machine. There's nothing obvious (in your code) to suggest that a 301 Moved Permanently would be issued in response to a valid JSON-RPC request.

Retrieving HTTP URLs using Perl scripting

I'm trying to save a whole web page on my system as a .html file and then parse that file to find some tags and use them.
I'm able to save/parse http://<url>, but not https://<url>. I'm using Perl.
I'm using the following code to save an HTTP page, which works fine, but it doesn't work for HTTPS:
use strict;
use warnings;
use LWP::Simple qw($ua get);
use LWP::UserAgent;
use LWP::Protocol::https;
use HTTP::Cookies;

sub main
{
    my $ua = LWP::UserAgent->new();
    my $cookies = HTTP::Cookies->new(
        file => "cookies.txt",
        autosave => 1,
    );
    $ua->cookie_jar($cookies);
    $ua->agent("Google Chrome/30");

    #$ua->ssl_opts( SSL_ca_file => 'cert.pfx' );
    $ua->proxy('http','http://proxy.com');
    my $response = $ua->get('http://google.com');
    #$ua->credentials($response, "", "usrname", "password");

    unless ($response->is_success) {
        print "Error: " . $response->status_line;
    }

    # Let's save the output.
    my $save = "save.html";
    unless (open SAVE, '>' . $save) {
        die "\nCannot create save file '$save'\n";
    }

    # Without this line, we may get a
    # 'wide characters in print' warning.
    binmode(SAVE, ":utf8");
    print SAVE $response->decoded_content;
    close SAVE;

    print "Saved ",
        length($response->decoded_content),
        " bytes of data to '$save'.";
}

main();
Is it possible to parse an HTTPS page?
Always worth checking the documentation for the modules that you're using...
You're using modules from libwww-perl. That includes a cookbook. And in that cookbook, there is a section about HTTPS, which says:
URLs with https scheme are accessed in exactly the same way as with http scheme, provided that an SSL interface module for LWP has been properly installed (see the README.SSL file found in the libwww-perl distribution for more details). If no SSL interface is installed for LWP to use, then you will get "501 Protocol scheme 'https' is not supported" errors when accessing such URLs.
The README.SSL file says this:
As of libwww-perl v6.02 you need to install the LWP::Protocol::https module from its own separate distribution to enable support for https://... URLs for LWP::UserAgent.
So you just need to install LWP::Protocol::https.
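Installation is a one-liner with cpanm (the classic cpan shell works too):

cpanm LWP::Protocol::https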
You need to have https://metacpan.org/module/Crypt::SSLeay for https links
It provides SSL support for LWP.
Bit me in the ass with a project of my own.

Importing local json file using d3.json does not work

I'm trying to import a local .json file using d3.json().
The file filename.json is stored in the same folder as my HTML file.
Yet the json parameter is null:
d3.json("filename.json", function(json) {
root = json;
root.x0 = h / 2;
root.y0 = 0;});
. . .
}
My code is basically the same as in this d3.js example
If you're running in a browser, you cannot load local files.
But it's fairly easy to run a dev server: on the command line, simply cd into the directory with your files, then:
python -m SimpleHTTPServer
(or python -m http.server using Python 3)
Now in your browser, go to localhost:8000 (or whatever port is shown on the command line).
The following used to work in older versions of d3:
var json = {"my": "json"};
d3.json(json, function(json) {
root = json;
root.x0 = h / 2;
root.y0 = 0;
});
In version d3.v5, you should do it as
d3.json("file.json").then(function(data){ console.log(data)});
Similarly, with csv and other file formats.
You can find more details at https://github.com/d3/d3/blob/master/CHANGES.md
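In v5, the returned promise also lets you surface load errors explicitly, which helps with the null-callback confusion above; a minimal sketch:

d3.json("file.json")
    .then(function(data) { console.log(data); })
    .catch(function(error) { console.error(error); });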
Adding to the previous answers, it's simple to use the HTTP server provided by Python (installed by default on most Linux/Mac machines).
Run the following command in the root of your project:
python -m SimpleHTTPServer
Then, instead of accessing file://.....index.html, open your browser at http://localhost:8000 (or whatever port is shown when you start the server). This way the browser can fetch all the files in your project without being blocked.
Refer to this code, which reads from a file and creates a graph:
http://bl.ocks.org/eyaler/10586116
I also had the same problem, but later I figured out that the problem was in the JSON file I was using (an extra comma). If you are getting null, try printing the error, like this:
d3.json("filename.json", function(error, graph) {
alert(error)
})
This works in Firefox; in Chrome, somehow the error is not printed.
Loading a local csv or json file with d3.js is not considered safe, and browsers prevent you from doing it. There are some solutions to get it working though. The following line basically does not work (csv or json) because it is a local import:
d3.csv("path_to_your_csv", function(data) { console.log(data); });
Solution 1:
Disable the security in your browser
Different browsers have different security settings that you can disable. This solution can work and you will be able to load your files. Disabling is, however, not advisable: it will make you vulnerable to all kinds of threats. On the other hand, who is going to use your software if you tell them to manually disable their browser's security?
Disable the security in Chrome:
--disable-web-security
--allow-file-access-from-files
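On Linux, for example, that looks roughly like the following (the profile path is an assumption; recent Chrome versions also require a throwaway user-data directory for --disable-web-security to take effect):

google-chrome --disable-web-security --allow-file-access-from-files --user-data-dir=/tmp/chrome-unsafe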
Solution 2:
Load your csv/json file from a website.
This may seem like a weird solution, but it works. It is an easy fix, though it can be impractical. See here for an example and check out the page source. This is the idea:
d3.csv("https://path_to_your_csv", function(data) {console.log(data) });
Solution 3:
Start your own web server, e.g. with Python.
Serving the files over HTTP avoids the browser's local-file checks. This may be a solution when you experiment with your code on your own machine, but in many cases not when you have users. The following will serve HTTP on port 8888 unless that port is already taken:
python -m http.server 8888
python -m SimpleHTTPServer 8888 &
Open the (Chrome) browser address bar and type the underneath. This will open the index.html. In case you have a different name, type the path to that local HTML page.
localhost:8888
Solution 4:
Use localhost and CORS
You could serve the files from localhost together with CORS headers, but the approach is not user-friendly, because setting it up may not be so straightforward.
Solution 5:
Embed your data in the HTML file
I like this solution the most. Instead of loading your csv, you can write a script that embeds your data directly in the HTML. This allows users to use their favorite browser, and there are no security issues. This solution may not be so elegant, because your HTML file can grow very large depending on your data, but it works. See here for an example and check out the page source.
Remove this line:
d3.csv("path_to_your_csv", function(data) { })
Replace with this:
var data =
[
$DATA_COMES_HERE$
]
You can't readily read local files, at least not in Chrome, and possibly not in other browsers either.
The simplest workaround is to include your JSON data in your script file, get rid of the d3.json call, and keep the code from the callback you passed to it.
Your code would then look like this:
json = { ... };
root = json;
root.x0 = h / 2;
root.y0 = 0;
...
I have used this:
d3.json("graph.json", function(error, xyz) {
    if (error) throw error;
    // the rest of my d3 graph code here
});
so you can refer to your json file using the variable xyz, where graph.json is the name of my local json file.
Use the resource as a local variable:
var filename = {x0: 0, y0: 0};

// shim: override d3.json so it passes the local object straight to the callback
// (you could also give the shim a different name instead of overriding d3.json)
d3.json = (x, cb) => cb.call(null, x);

d3.json(filename, function(json) {
    root = json;
    root.x0 = h / 2;
    root.y0 = 0;
});