How to make a web server using Erlang - mysql

I'm new to Erlang.
I'm trying to make a web server with Erlang. How do I do it?
I was using this code to run a local server:
-module(test).
-export([start/0, service/3]).

start() ->
    inets:start(),   %% make sure the inets application is running first
    inets:start(httpd, [
        {modules, [
            mod_auth,
            mod_esi,
            mod_actions,
            mod_cgi,
            mod_dir,
            mod_get,
            mod_head,
            mod_log,
            mod_disk_log
        ]},
        {port, 8082},
        {server_name, "helloworld"},
        {server_root, "C://xampp//tmp"},
        {document_root, "C://xampp//htdocs"},
        {erl_script_alias, {"/erl", [test]}},
        {error_log, "error.log"},
        {security_log, "security.log"},
        {transfer_log, "transfer.log"},
        {mime_types, [
            {"html", "text/html"},
            {"css", "text/css"},
            {"js", "application/x-javascript"}
        ]}
    ]).
service(SessionID, _Env, _Input) ->
    mod_esi:deliver(SessionID, [
        "Content-Type: text/html\r\n\r\n",
        "<!DOCTYPE html>
        <html>
        <head>
        <meta charset='utf-8'>
        <meta http-equiv='X-UA-Compatible' content='IE=edge'>
        <meta name='viewport' content='width=device-width, initial-scale=1'>
        <title>HTML1</title>
        <script
            src='https://code.jquery.com/jquery-3.2.1.js'
            integrity='sha256-DZAnKJ/6XZ9si04Hgrsxu/8s717jcIzLy3oi35EouyE='
            crossorigin='anonymous'></script>
        <link href='https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css' rel='stylesheet'/>
        <link href='css/test1.css' rel='stylesheet'/>
        <script src='https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js'></script>
        </head>
        <body>Ham oc cho!
        <div class='header'>
        <ul class='first'>
        <li class='col-md-4'><a href='#'>My account</a></li>
        <li class='col-md-4'><a href='#'>Order status</a></li>
        <li class='col-md-4'><a href='#'>Wish list</a></li>
        <li class='col-md-4'><a href='#'>Shopping cart</a></li>
        <li class='col-md-4'><a href='#'>Log in</a></li>
        <li class='col-md-4'><a href='#'>Register</a></li>
        </ul>
        </div>
        </body>
        </html>" ]).
But I don't see any way to add CSS/JS files, and I don't know how to write a backend for this.
If you have any examples or documentation, please share them.

There are some useful Erlang tools for working with web protocols, such as Cowboy, Mochiweb, ChicagoBoss, and Yaws.
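For example, here is a rough sketch of a minimal Cowboy server, assuming Cowboy 2.x; the module name, port, and paths are illustrative, so check the Cowboy documentation for the exact API of your version. It also shows one way to serve static CSS/JS files through cowboy_static, which is relevant to the question above:
-module(hello_handler).
-export([start/0, init/2]).

%% Plain HTTP handler: reply 200 with a small text body.
init(Req0, State) ->
    Req = cowboy_req:reply(200,
                           #{<<"content-type">> => <<"text/plain">>},
                           <<"Hello from Cowboy!">>,
                           Req0),
    {ok, Req, State}.

%% Start a listener on port 8080: this handler at "/" and static files
%% (css/js) served from ./priv/static under "/static/...".
start() ->
    Dispatch = cowboy_router:compile([
        {'_', [
            {"/", hello_handler, []},
            {"/static/[...]", cowboy_static, {dir, "priv/static"}}
        ]}
    ]),
    {ok, _} = cowboy:start_clear(my_http_listener,
                                 [{port, 8080}],
                                 #{env => #{dispatch => Dispatch}}).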

You might find it instructive to work through sws, my Erlang simple web server. It shows how to handle connections, read HTTP requests from a socket, and send replies using Erlang's built-in socket support and HTTP support.
The web server works by accepting incoming connections and parsing incoming requests using Erlang's built-in support for HTTP request parsing — see line 29:
ok = inet:setopts(S, [{packet,http_bin}]),
The {packet, http_bin} socket option tells Erlang to try to parse incoming socket data as HTTP. In the serve/3 function at line 36, for flow control and backpressure purposes, we keep the socket in {active, once} mode, which also means sws receives incoming data from Erlang as messages — see lines 37-41:
ok = inet:setopts(S, [{active, once}]),
HttpMsg = receive
              {http, S, Msg} -> Msg;
              _ -> gen_tcp:close(S)
          end,
The serve/3 function is recursive, receiving HTTP request data until we get a full request or an error. Once serve/3 has a full request, it passes it to a handler function, which you're expected to provide when you call sws:start/1,2. The handler is expected to return a 3-tuple of HTTP status, HTTP reply headers, and HTTP reply body, where the headers or body can be empty depending on the return status.
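To make the shape of that loop concrete, here is a rough sketch of how such a receive loop could be written. This is not the actual sws source: send_reply/4 is a hypothetical helper, and the exact shape of the parsed URI depends on Erlang's HTTP packet parsing (typically {abs_path, Path}):
%% Hypothetical request loop built on {packet, http_bin} + {active, once}.
%% The VM delivers parsed HTTP terms as messages, one per setopts call.
serve_loop(S, Handler, Props, Headers) ->
    ok = inet:setopts(S, [{active, once}]),
    receive
        {http, S, {http_request, Method, Uri, Version}} ->
            serve_loop(S, Handler,
                       [{method, Method}, {uri, Uri}, {version, Version} | Props],
                       Headers);
        {http, S, {http_header, _, Name, _, Value}} ->
            serve_loop(S, Handler, Props, [{Name, Value} | Headers]);
        {http, S, http_eoh} ->
            %% Full request received: hand it to the user-supplied handler.
            Request = [{headers, lists:reverse(Headers)} | Props],
            {Status, RespHeaders, Body} = Handler(S, Request),
            send_reply(S, Status, RespHeaders, Body);   %% hypothetical helper
        {http, S, {http_error, Reason}} ->
            gen_tcp:close(S),
            exit({http_error, Reason})
    end.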
For example, here's a simple "Hello, World!" application running in an Erlang shell:
1> c(sws).
{ok,sws}
2> Sws = spawn(sws, start, [fun(_,_) -> {200, [], <<"Hello, World!">>} end]).
<0.73.0>
Here, the fun passed as a handler always returns HTTP status 200, no reply headers, and a string binary for the reply body. Accessing the server via curl from a Unix shell shows the expected reply:
$ curl http://localhost:8000
Hello, World!
If we pass -v to curl to show more details, we see:
$ curl -v http://localhost:8000
* Rebuilt URL to: http://localhost:8000/
* Trying ::1...
* TCP_NODELAY set
* Connection failed
* connect to ::1 port 8000 failed: Connection refused
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8000 (#0)
> GET / HTTP/1.1
> Host: localhost:8000
> User-Agent: curl/7.51.0
> Accept: */*
>
* HTTP 1.0, assume close after body
< HTTP/1.0 200
<
* Curl_http_done: called premature == 0
* Closing connection 0
Hello, World!
First curl tries to connect over IPv6, which fails since sws doesn't support it (though it could), so it retries over IPv4, which succeeds. Curl then sends a GET request for the / resource. When curl sees the reply, it shows the 200 status code. Note also that curl sees the reply is HTTP 1.0 and thus correctly assumes the connection will close after the body is sent, so after receiving the reply it closes its side as well.
The handler function you supply takes two arguments: the client socket and a request object, which is a property list consisting of 2-tuples where the first tuple element is an atom identifying its associated data. For example, the handler can determine the invoked HTTP method by finding the method tuple in the Request argument using lists:keyfind/3:
{method, Method} = lists:keyfind(method, 1, Request),
For our example above, Method would have the value of 'GET' (an atom). Other properties of the request that can be discovered like this are:
uri for the requested resource
version for the client HTTP version
headers for a list of the HTTP headers in the request
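As an illustration (this is not code from sws itself), a handler that routes on the method and URI could be written like this; the header format in the reply tuple and the {abs_path, ...} shape of the URI are assumptions:
%% Hypothetical handler: route on method and URI from the request proplist.
handler(_Socket, Request) ->
    {method, Method} = lists:keyfind(method, 1, Request),
    {uri, Uri} = lists:keyfind(uri, 1, Request),
    case {Method, Uri} of
        {'GET', {abs_path, <<"/">>}} ->
            {200, [{"Content-Type", "text/html"}], <<"<h1>Home</h1>">>};
        {'GET', _} ->
            {404, [], <<"not found">>};
        {_, _} ->
            {405, [], <<>>}
    end.
You would pass this handler to sws:start/1,2, for example as fun my_module:handler/2, in place of the anonymous fun used above.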
The handler function you supply can be as simple or complex as you wish. Note that if your handler fails and causes an exception, sws catches it and returns HTTP status code 500.
To stop the web server, back in the Erlang shell we send a stop message to the spawned sws process:
3> Sws ! stop.
stop
=ERROR REPORT==== 19-Jul-2017::11:17:05 ===
Error in process <0.77.0> with exit value:
{{badmatch,{error,closed}},[{sws,accept,2,[{file,"sws.erl"},{line,28}]}]}
The error shown here, which can be ignored, is simply due to the fact that sws always assumes that gen_tcp:accept/1 succeeds — see line 28:
{ok, S} = gen_tcp:accept(LS),
It would be easy enough to make this a case expression instead and handle error returns as well.
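For instance (handle_connection/1 here is a hypothetical stand-in for whatever sws does with the accepted socket, not a function sws actually defines):
case gen_tcp:accept(LS) of
    {ok, S} ->
        handle_connection(S);
    {error, closed} ->
        ok;   %% the listen socket was closed, e.g. by the stop message
    {error, Reason} ->
        error_logger:error_msg("accept failed: ~p~n", [Reason])
end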
Note that sws is intended for demonstration and learning, so it's intentionally simple rather than efficient: it supports HTTP 1.0 only and handles only one request per connection.

Consider using http://phoenixframework.org/
It uses Elixir, which runs on the Erlang VM.

Related

How XMLHttpRequest behaves in combination with same-origin policy

I have a basic HTML page, hosted at www.foodomain.com, with a simple script that just tries to make a POST call to a site located in another domain (www.bardomain.com), in order to trigger an action on that site. The attacker.html file is:
<html lang="en">
<head>
<meta charset="utf-8">
<title>Attacker.html</title>
<script language="JavaScript" type="text/javascript">
var http = new XMLHttpRequest();
var url = "http://www.bardomain.com/attacked.php";
var params = "action=deleteAll";
http.open("POST", url, true);
//Send the proper header information along with the request
http.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
http.setRequestHeader("Content-length", params.length);
http.setRequestHeader("Connection", "close");
http.onreadystatechange = function() {//Call a function when the state changes.
if (http.readyState == 4 && http.status == 200) {
alert("fet");
}
}
http.send(params);
</script>
</head>
<body>
Some presentation text ...
</body>
</html>
As far as I know, that behaviour should be blocked by the web browser due to its same-origin policy, but in fact the POST call to the www.bardomain.com site is made, though the action is never accomplished because the Apache server sends an HTTP 302 response:
www.bardomain.com:80 192.168.56.1 - - [14/Dec/2014:12:57:30 +0100] "POST /attacked.php HTTP/1.1" 302 509 "http://www.foodomain/attacker.html" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.9; rv:34.0) Gecko/20100101 Firefox/34.0"
Since it's an HTTP 302 response the action is not really done, but I didn't even expect the request to be sent by the browser (since it's to another domain). I'd really appreciate it if anybody could give me an explanation of this behaviour.
On the other hand, another curious behaviour occurs if, instead of accessing the attacker.html file from Apache, I just load the file in the Eclipse web browser: the POST message is sent and returns an HTTP 200 response, so the action is performed on www.bardomain.com:
www.bardomain.com:80 192.168.56.1 - - [14/Dec/2014:13:20:52 +0100] "POST /attacked.php HTTP/1.1" 200 1586 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_5) AppleWebKit/537.78.2 (KHTML, like Gecko) Safari/522.0"
Any explanation for these behaviours?
Sending the request is not blocked, but the other end may not send you anything back. Also, your browser may not use content loaded from outside your host domain if the CORS headers are not present on the returned payload.
In other words, the whole cross-site request security mechanism is a two-way handshake before it works. The client (your browser) typically makes a preflight request with the OPTIONS verb to verify that it will be allowed to make the request in the first place. The server (other end) then replies with the verbs you may use when requesting resources from outside its domain.
Then, when you actually request the asset, the client provides Host information for the server to validate, and if all is well, extra headers are added to the response telling the client that the information was requested and is okay to use.
http://en.wikipedia.org/wiki/Cross-origin_resource_sharing
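To make that handshake concrete, a preflight exchange for a cross-origin POST that requires one could look roughly like this (the header values are illustrative, not taken from the servers in the question):
OPTIONS /attacked.php HTTP/1.1
Host: www.bardomain.com
Origin: http://www.foodomain.com
Access-Control-Request-Method: POST
Access-Control-Request-Headers: Content-Type

HTTP/1.1 200 OK
Access-Control-Allow-Origin: http://www.foodomain.com
Access-Control-Allow-Methods: POST, GET, OPTIONS
Access-Control-Allow-Headers: Content-Type
Only after a response like this does the browser send the actual request and expose the result to the calling script; without a matching Access-Control-Allow-Origin header, the browser may still send a simple request but will not hand the response to your JavaScript.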

WebSocket permessage-deflate in Chrome with no context takeover

I have this problem with compression, and I am not sure if it is a bug. My WebSocket server does not support context takeover, and I am having problems sending messages, but not receiving.
The browser issues a request like this:
GET /socket HTTP/1.1
Host: thirdparty.com
Origin: http://example.com
Connection: Upgrade
Upgrade: websocket
Sec-WebSocket-Version: 13
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Extensions: permessage-deflate; client_max_window_bits, x-webkit-deflate-frame
If the server does not specify any option about context takeover:
HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Access-Control-Allow-Origin: http://example.com
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
Sec-WebSocket-Extensions: permessage-deflate
I can read and write the first message, but cannot do subsequent reads or writes, because Chrome expects the server to be keeping the context.
So my server provides this answer:
HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Access-Control-Allow-Origin: http://example.com
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
Sec-WebSocket-Extensions: permessage-deflate; client_no_context_takeover; server_no_context_takeover
And now I can receive messages without problems, but again, I can only send the first message; the second message fails, and I see an error in Chrome saying that it failed to inflate the frame. I tried sending two identical strings, and I can see that the server sends the same data twice, but the client fails to decompress it the second time.
So it seems that Chrome accepts the client_no_context_takeover parameter, which specifies that the client won't use the same compression context for all messages when compressing, but ignores server_no_context_takeover, which indicates that the server won't use the same context.
Is this a bug in Chrome? I am not clear on whether I can send back options that have not been offered/requested by the client.
Is there any other option I can use to disable the client context takeover?
UPDATE:
In WebSocketPerMessageDeflate.cpp in the Chromium source code, I can see:
if (clientNoContextTakeover != parameters.end()) {
if (!clientNoContextTakeover->value.isNull()) {
m_failureReason = "Received invalid client_no_context_takeover parameter";
return false;
}
mode = WebSocketDeflater::DoNotTakeOverContext;
++numProcessedParameters;
}
But also:
if (serverNoContextTakeover != parameters.end()) {
if (!serverNoContextTakeover->value.isNull()) {
m_failureReason = "Received invalid server_no_context_takeover parameter";
return false;
}
++numProcessedParameters;
}
In the first snippet it is setting the "mode" variable, but in the second one it is not doing anything, so it seems it is basically ignoring the parameter.
Cheers.
A server must send a server_no_context_takeover parameter in a response only if the client requested "no context takeover". In essence, the server acknowledges the client's request.
If a server decides to do "no context takeover" for sending on its own (without the client having requested it), that's fine. In this case, no parameter is sent by the server.
A deflate sender can always, on its own, drop the compression context and/or reduce the compression window size. There is no need to tell the receiver; the deflate wire format has enough information for the receiver to cope with that.
Here is how the configuration and handshake look with Crossbar.io.
I finally found the problem.
https://datatracker.ietf.org/doc/html/draft-ietf-hybi-permessage-compression-17#section-8.2.3
8.2.3.4. Using a DEFLATE Block with BFINAL Set to 1
Going through the examples in the draft, I found my server was sending slightly different payloads. It turned out that the problem was the BFINAL bit; I needed to set it to 0 by adding a 0 byte at the end.
Now it works.
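As a side note (not part of the original answer), here is a rough Erlang sketch, Erlang being the language of the main question above, of a permessage-deflate sender that avoids context takeover. It relies on the standard zlib module: raw deflate with a negative window size, one sync flush per message (which keeps BFINAL at 0), stripping the 00 00 FF FF trailer the flush appends, and resetting the stream before the next message:
-module(pmd_sketch).
-export([demo/0]).

%% Compress one message with no context takeover: sync-flush, strip the
%% 00 00 FF FF trailer, then reset the stream so the next message starts
%% from a fresh context.
compress_frame(Z, Msg) ->
    Out = iolist_to_binary(zlib:deflate(Z, Msg, sync)),
    Payload = binary:part(Out, 0, byte_size(Out) - 4),
    ok = zlib:deflateReset(Z),
    Payload.

demo() ->
    Z = zlib:open(),
    %% Raw deflate (negative WindowBits), as permessage-deflate requires.
    ok = zlib:deflateInit(Z, default, deflated, -15, 8, default),
    P1 = compress_frame(Z, <<"hello">>),
    P2 = compress_frame(Z, <<"hello">>),   %% identical input, same payload bytes
    zlib:close(Z),
    {P1, P2}.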

Unable to send lengthy JSON string to ASP.NET HttpHandler using jQuery ajax call (IIS 6)

BACKGROUND:
I have an HTTP handler which receives SIM/CDMA numbers ('8953502103000101242') in JSON string format from a jQuery Ajax call using the HTTP POST method, in order to activate them on the server.
THE PROBLEM:
On the local development environment I can send up to 65K SIM numbers (8953502103000101242 key/value pairs in the JSON string), BUT when I deploy to the LIVE server I face the following problem.
If I send 2000 SIM numbers separated by commas to the HTTP handler, the handler receives the HTTP request successfully. But when I send more than that (3000, 4000, 5000 SIM numbers in the JSON string), the HTTP request doesn't reach the HttpHandler on the server even after a few hours.
If I send 5000 SIM numbers in an HTTP POST using jQuery, the total request size is 138.0 KB, and it should be accepted by the server because the server's maxRequestLength is 2048576. But it is not reaching the server when we deploy to the live server, while it works fine on the local development environment.
TRIED SOLUTION:
I tried to resolve the problem by editing the httpRuntime configuration in web.config as follows.
Using the above httpRuntime configuration, I noticed in the Firefox Firebug Net panel that the request keeps waiting to be sent when executionTimeout="7200", but if executionTimeout="600" then it returns a timeout error.
INTENDED SOLUTION:
I think that if I sync the above httpRuntime element in the Machine.config file as well, then it will work fine.
REQUIRED SOLUTION:
What could the problem be, and how can it be resolved? Please suggest something in this regard.
Jquery call to HttpHandler is as follows:
$.ajax({
    type: "POST",
    url: "../../HttpHandlers/PorthosCommonHandler.ashx",
    data: { order: JSON.stringify(orderObject), method: "ValidateOrderInput", orgId: sCId, userId: uId, languageId: lId },
    success: function(response) {
        var orderResult = new dojo.data.ItemFileReadStore({
            data: {
                jsId: "jsorderResult",
                id: "orderResult",
                items: response.Data
            }
        });
    }
});
UPDATE
I have diagnosed the problem. One problem was the executionTimeout configuration in web.config. After making that change, the second problem was that, because of the long operation time, the request was being interrupted by the network connection. I made sure I had reliable internet connectivity, tested it again, and it worked.
BUT now I am facing another problem. My HttpHandler sends a request to a web service, and I am getting the following exception in response:
The operation has timed out
I have fixed the problem.
One problem was the executionTimeout configuration in web.config. After making that change, the second problem was that, because of the long operation time, the request was being interrupted by the network connection. I made sure I had reliable internet connectivity and tested it again.
I configured the web services as follows:
<httpRuntime executionTimeout="7200" enable="true" maxRequestLength="2048576" useFullyQualifiedRedirectUrl="false" />
and
[WebMethod(Description = "Delete template",BufferResponse = false)]
Specifying "BufferResponse=false" indicates that .NET should begin sending the response to the client as soon as any part of the response becomes available, instead of waiting for the entire response to become available.

Game + Web Server using ExpressJS

I'm currently trying to develop a simple Flash game which talks to a node.js server.
My question is this:
How might I go about making a server which differentiates web requests from game requests?
Here are the details of what I've done:
Previously, I used the net and static modules to handle requests from the game client and the browser, respectively.
TwoServers.js
// Web server
var file = new staticModule.Server('./public');
http.createServer(function(req, res) {
    req.addListener('end', function() {
        file.serve(req, res, function(err, result) {
            // do something
        });
    });
}).listen(port1, "127.0.0.1");

// Game Server
var server = net.createServer(function(socket) {
    // handle messages to/from Flash client
    socket.setEncoding('utf8');
    socket.write('foo');
    socket.on('data', onMessageReceived);
});
server.listen(port2, "127.0.0.1");
I'd like to do the above with just an Express server listening in on a single port, but I'm not sure how to go about doing that.
Here's what I'm thinking it might look like (doesn't actually work):
OneServer.js
var app = express();

app.configure(function() {
    // ...
    app.use('/', express.static(path.join(__dirname, 'public'))); // The static server
});

app.get('/', function(req, res) { // This is incorrect (expects http requests)
    // Handle messages to/from Flash client
    var socket = req.connection;
    socket.setEncoding('utf8');
    socket.write('foo');
    socket.on('data', onMessageReceived);
});

app.listen(app.get('port')); // Listen in on a single port
But I'd like to be able to differentiate from web page requests and requests from the game.
Note: Actionscript's XMLSocket makes TCP requests, so using app.get('/') is incorrect for two reasons:
When Flash writes to the socket, it isn't using the http protocol, so app.get('/') will not be fired when the game tries to connect.
Since I don't have access to the correct net.Socket object, I cannot expect to be reading or writing from/to the correct socket. Instead, I'll be reading/writing from/to the socket associated with the web page requests.
Any help on this would be much appreciated (especially if I'm reasoning about this the wrong way).
When a TCP connection is opened to a given port, the server (Node + Express) has no way of telling who made that connection (whether it's a browser or your custom client).
Therefore, your custom client must speak HTTP if it wishes to communicate with the Express server sitting on port 80. Otherwise, the data you send over a freshly opened socket (in your custom protocol) will just look like garbage to Express, and it will close the connection.
However, this doesn't mean you can't get a TCP stream to speak a custom protocol over – you just have to speak HTTP first and ask to switch protocols. HTTP provides a mechanism exactly to accomplish this (the Upgrade header), and in fact it is how WebSockets are implemented.
When your Flash client first opens a TCP connection to your server, it should send: (note line breaks MUST be sent as CRLF characters, aka \r\n)
GET /gamesocket HTTP/1.1
Upgrade: x-my-custom-protocol/1.0
Host: example.com
Cache-Control: no-cache
​
The value of Upgrade is your choice, Host MUST be sent for all HTTP requests, and the Cache-Control header ensures no intermediate proxies service this request. Notice the blank line, which indicates the request is complete.
The server responds:
HTTP/1.1 101 Switching Protocols
Upgrade: x-my-custom-protocol/1.0
Connection: Upgrade
​
Again, a blank line indicates the headers are complete, and after that final CRLF, you are now free to send any data you like in any format over the TCP connection.
To implement the server side of this:
app.get('/gamesocket', function(req, res) {
    if (req.get('Upgrade') == 'x-my-custom-protocol/1.0') {
        res.writeHead(101, { Upgrade: req.get('Upgrade'), Connection: 'Upgrade' });
        // `req.connection` is the raw net.Socket object
        req.connection.removeAllListeners(); // make sure Express doesn't listen to the data anymore... we've got it from here!
        // now you can do whatever with the socket
        req.connection.setEncoding('utf8');
        req.connection.write('foo');
        req.connection.on('data', onMessageReceived);
    } else res.send(400); // bad request
});
Of course, remember that TCP is not a message-based protocol, it only provides a stream, and thus the data events of a Socket can either fragment a single logical message into multiple events or even include several logical messages in a single event. Be prepared to manually buffer data.
Your other option here is to use socket.io, which implements a WebSockets server plus its own protocol on top of the WebSockets protocol. The WebSockets protocol is message-based. It mostly works just like I've outlined here, and then after HTTP negotiation adds a message framing layer on top of the TCP connection so that the application doesn't have to worry about the data stream. (Using WebSockets also opens the possibility of connecting to your server from a HTML page if necessary.)
There is a Flash socket.io client available.

Help with HTTP Intercepting Proxy in Ruby?

I have the beginnings of an HTTP Intercepting Proxy written in Ruby:
require 'socket' # Get sockets from stdlib

server = TCPServer.open(8080) # Socket to listen on port 8080
loop { # Servers run forever
  Thread.start(server.accept) do |client|
    puts "** Got connection!"
    @output = ""
    @host = ""
    @port = 80
    while line = client.gets
      line.chomp!
      if (line =~ /^(GET|CONNECT) .*(\.com|\.net):(.*) (HTTP\/1.1|HTTP\/1.0)$/)
        @port = $3
      elsif (line =~ /^Host: (.*)$/ && @host == "")
        @host = $1
      end
      print line + "\n"
      @output += line + "\n"
      # This *may* cause problems with not getting full requests,
      # but without this, the loop never returns.
      break if line == ""
    end
    if (@host != "")
      puts "** Got host! (#{@host}:#{@port})"
      out = TCPSocket.open(@host, @port)
      puts "** Got destination!"
      out.print(@output)
      while line = out.gets
        line.chomp!
        if (line =~ /^<proxyinfo>.*<\/proxyinfo>$/)
          # Logic is done here.
        end
        print line + "\n"
        client.print(line + "\n")
      end
      out.close
    end
    client.close
  end
}
This simple proxy that I made parses the destination out of the HTTP request, then reads the HTTP response and performs logic based on special HTML tags. The proxy works for the most part, but seems to have trouble dealing with binary data and HTTPS connections.
How can I fix these problems?
First, you would probably be better off building on an existing Ruby HTTP proxy implementation. One such is already available in the Ruby standard library, namely WEBrick::HTTPProxyServer. See for example this related question for an implementation based on that same class: Webrick transparent proxy.
Regarding proxying HTTPS, you can't do much more than just pass the raw bytes. As HTTPS is cryptographically protected, you cannot inspect the contents at the HTTP protocol level. It is just an opaque stream of bytes.
WEBrick uses blocking I/O, which means it is not able to stream the response. For example, if you go to a YouTube page to watch a video, the stream will not be forwarded to your browser until the proxy has downloaded the entire video content.
If you want the video to play in your browser while it downloads, you have to look for a non-blocking I/O solution like EventMachine.
For HTTPS the solution is a bit more complicated, since you have to develop a man-in-the-middle proxy.
This was an old question, but for the sake of completeness here is another answer.
I've implemented an HTTP/HTTPS interception proxy in Ruby; the project is hosted on GitHub.
The HTTP case is obvious; HTTPS interception is accomplished via an HTTPS server that acts as a reverse proxy (and handles the TLS handshake). I.e.:
Client(e.g. Browser) <--> Proxy1 <--> HTTPS Reverse Proxy <--> Target Server
As Valko mentioned, when a client connects to an HTTPS server through a proxy, you'll see a stream of encrypted bytes (since SSL provides end-to-end encryption). But not everything is encrypted: the proxy needs to know where the stream of bytes should be forwarded, so the client issues a CONNECT host:port request (the body of the request being the SSL stream).
The trick here is that the first proxy will forward this request to the HTTPS Reverse Proxy instead of the real target server. This reverse proxy will handle the SSL negotiation with the client, have access to the decrypted requests, and send copies (optionally altered versions) of these requests to the real target server by acting as a normal client. It will get the responses from the target server, (optionally) alter the responses, and send them back to the client.