Docker Notary: no trust data available

I'm new to Docker Notary and need to set up a server for my research work. The catch is that I am using a self-signed certificate; I have already overwritten the default root-ca.crt, notary-signer.crt and notary-server.crt.
An openssl s_client test completes the TLS handshake (with the expected self-signed verify code 18), as can be seen from the output:
subject=/C=SG/ST=Some-State/O=<value>/OU=DCT/CN=<Amazon EC2 hostname>/emailAddress=<value>
issuer=/C=SG/ST=Some-State/O=<value>/OU=DCT/CN=<Amazon EC2 hostname>/emailAddress=<value>
No client certificate CA names sent
Peer signing digest: SHA384
Server Temp Key: ECDH, P-256, 256 bits
SSL handshake has read 2348 bytes and written 431 bytes
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-GCM-SHA384
Server public key is 4096 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1.2
Cipher : ECDHE-RSA-AES256-GCM-SHA384
Session-ID: 398FCDC32B644161D9243A25AA4E001408874E93247427609CD95E6EF8F83761
Session-ID-ctx:
Master-Key: 5427F024069D898563712EA826F2DF1582E8383F63FB13E9F6C6B6CAF1C4DC0A027942679426341F889F2E9DB0062C1D
Key-Arg : None
PSK identity: None
PSK identity hint: None
SRP username: None
TLS session ticket:
0000 - bc e4 fe 83 82 a1 1b 96-44 f7 1d 0c 9e 6f 45 8d ........D....oE.
0010 - 93 1e 5a c2 8c 9f 72 db-f6 45 4a 86 69 fe 30 20 ..Z...r..EJ.i.0
0020 - 98 9f 08 3d f5 bd ad d5-65 df 48 58 e4 6c f9 06 ...=....e.HX.l..
0030 - b6 28 e7 df 03 04 ac ad-ea 87 2c d8 db 64 73 44 .(........,..dsD
0040 - 0a b7 26 fe 2f a7 39 9c-5d 25 ca 21 68 76 37 26 ..&./.9.]%.!hv7&
0050 - 5e 0b d7 ea be 97 ea c8-16 b6 b0 04 30 13 0d 1e ^...........0...
0060 - 01 98 5e cf a1 58 61 df-30 14 d8 a6 f5 c0 7b 85 ..^..Xa.0.....{.
0070 - 11 cb 4c 73 93 e3 1e 53- ..Ls...S
Start Time: 1494736027
Timeout : 300 (sec)
Verify return code: 18 (self signed certificate)
HTTP/1.1 400 Bad Request
Content-Type: text/plain; charset=utf-8
Connection: close
400 Bad Requestclosed
I have also edited server-config.json to set the certificate algorithm to RSA, and edited config.json in cmd/notary to point to my hostname:4443.
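The relevant part of the client config looks roughly like this (a sketch based on the Notary client config format; the hostname and CA path are placeholders, and root_ca must point at the self-signed CA so the client will trust the server):
{
  "trust_dir": "~/.docker/trust",
  "remote_server": {
    "url": "https://<server hostname>:4443",
    "root_ca": "/path/to/root-ca.crt"
  }
}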
The issue is that after I rebuild Docker Notary and run the command
notary -s https://<server hostname>:4443 -d ~/.docker/trust list docker.io/library
I get this result
* fatal: no trust data available
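Note: as far as I can tell, a Notary server only holds trust data for a GUN once that repository has been initialized and published against it, so on a fresh server this error is expected for any GUN; a GUN also normally includes the image name (e.g. docker.io/library/alpine rather than docker.io/library). A minimal round-trip with a hypothetical GUN would be:
notary -s https://<server hostname>:4443 -d ~/.docker/trust init docker.io/library/myimage
notary -s https://<server hostname>:4443 -d ~/.docker/trust publish docker.io/library/myimage
notary -s https://<server hostname>:4443 -d ~/.docker/trust list docker.io/library/myimage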
I find it frustrating that I have spent the last two days pondering over this; the documentation is not very clear on how to do it either (probably because it is not even stable yet).
Any help on this would be appreciated!

Related

ACR122U NFC Reader compatible with EMV Contactless Cards

I am trying to read an EMV card using an APDU command; however, it seems the ACR122U external reader is blocking the APDU command.
Select Command:
APDU-C -> 00 A4 04 00 0E 32 50 41 59 2E 53 59 53 2E 44 44 46 30 31 0E
APDU-R <- Error no response
Is it possible that the ACR122U reader is blocking the command?
You want to SELECT FILE 2PAY.SYS.DDF01, the "Proximity Payment System Environment (PPSE)" used by contactless cards, to get the PSE directory; the card should respond with an Application Identifier (AID). But you set Le to 0E; replace it with 00.
Corrected APDU =>
PPSE = '00 a4 04 00 0e 32 50 41 59 2e 53 59 53 2e 44 44 46 30 31 00'
If the selection fails, the ADF doesn't exist (SW1/SW2 = 6A 82).
If the selection succeeds, continue with a SELECT command for an Application Identifier (AID).
Possible AIDs:
A0000000031010
A0000000032020
A0000000041010
A0000000043060
AIDPrefix ='00 a4 04 00 07'
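Combining the prefix with one of the AIDs above plus a trailing Le of 00 gives a complete SELECT command; for example, for the first AID in the list:
SELECT AID = '00 A4 04 00 07 A0 00 00 00 03 10 10 00'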

Perl-generated UTF8 strings in JSON corrupted on client side

I have a Perl CGI script that accesses Thai-language UTF-8 strings in a PostgreSQL DB and returns them to a web-based front end as JSON. The strings are fine when I get them from the DB and after I encode them as JSON (based on writing to a log file). However, when the client receives them they are corrupted, for example:
featurename "à¹\u0082รà¸\u0087à¹\u0080รียà¸\u0099วัà¸\u0094ภาษี"
Clearly some chars are being converted to Unicode escape sequences, but not all.
I could really use some suggestions as to how to solve this.
Simplified code snippet follows. I am using 'utf8' and 'utf8::all', as well as 'JSON'.
Thanks in advance for any help you can provide.
my $dataId = $cgi->param('dataid');
my $table = "uploadpoints";
my $sqlcommand = "select id,featurename from $table where dataid=$dataId;";
my $stmt = $gDbh->prepare($sqlcommand);
my $numrows = $stmt->execute;
# print JSON header (the blank line after the header is required by CGI)
print <<EOM;
Content-type: application/json; charset="UTF-8"

EOM
my @retarray;
for (my $i = 0; ($i < $numrows); $i=$i+1)
{
my $hashref = $stmt->fetchrow_hashref("NAME_lc");
#my $featurename = $hashref->{'featurename'};
#logentry("Point $i feature name is: $featurename\n");
push @retarray,$hashref;
}
my $json = encode_json (\@retarray);
logentry("JSON\n $json");
print $json;
I have modified and simplified the example, now running locally rather than via browser invocation:
my $dataId = 5;
my $table = "uploadpoints";
my $sqlcommand = "select id,featurename from $table where dataid=$dataId and id=75;";
my $stmt = $gDbh->prepare($sqlcommand);
my $numrows = $stmt->execute;
my @retarray;
for (my $i = 0; ($i < $numrows); $i=$i+1)
{
my $hashref = $stmt->fetchrow_hashref("NAME_lc");
my $featurename = $hashref->{'featurename'};
print "featurename $featurename\n";
push @retarray,$hashref;
}
my $json = encode_json (\@retarray);
print $json;
Using hexdump as in Stefan's example, I've determined that the data as read from the database are already in UTF-8. It looks as though they are being re-encoded in the JSON encode method. But why?
The data in the JSON use exactly twice as many bytes as the original UTF-8.
perl testcase.pl | hexdump -C
00000000 66 65 61 74 75 72 65 6e 61 6d 65 20 e0 b9 82 e0 |featurename ....|
00000010 b8 a3 e0 b8 87 e0 b9 80 e0 b8 a3 e0 b8 b5 e0 b8 |................|
00000020 a2 e0 b8 99 e0 b9 81 e0 b8 88 e0 b9 88 e0 b8 a1 |................|
00000030 e0 b8 88 e0 b8 b1 e0 b8 99 e0 b8 97 e0 b8 a3 e0 |................|
00000040 b9 8c 0a 5b 7b 22 66 65 61 74 75 72 65 6e 61 6d |...[{"featurenam|
00000050 65 22 3a 22 c3 a0 c2 b9 c2 82 c3 a0 c2 b8 c2 a3 |e":"............|
00000060 c3 a0 c2 b8 c2 87 c3 a0 c2 b9 c2 80 c3 a0 c2 b8 |................|
00000070 c2 a3 c3 a0 c2 b8 c2 b5 c3 a0 c2 b8 c2 a2 c3 a0 |................|
00000080 c2 b8 c2 99 c3 a0 c2 b9 c2 81 c3 a0 c2 b8 c2 88 |................|
00000090 c3 a0 c2 b9 c2 88 c3 a0 c2 b8 c2 a1 c3 a0 c2 b8 |................|
000000a0 c2 88 c3 a0 c2 b8 c2 b1 c3 a0 c2 b8 c2 99 c3 a0 |................|
000000b0 c2 b8 c2 97 c3 a0 c2 b8 c2 a3 c3 a0 c2 b9 c2 8c |................|
000000c0 22 2c 22 69 64 22 3a 37 35 7d 5d |","id":75}]|
000000cb
Further suggestions? I tried using decode on the UTF-8 string but got errors related to wide characters.
I did read the recommended answer from Tom Christiansen, as well as his Unicode tutorials, but I will admit much of it went over my head. Also, it seems my problem is considerably more constrained.
I did wonder whether retrieving the hash value and assigning it to a normal variable was doing some sort of auto-decoding or encoding. I do not really understand when Perl uses its internal character format as opposed to when it retains the external encoding.
UPDATE WITH SOLUTION
Turns out that since the string retrieved from the DB is already UTF-8 encoded, I need to use to_json rather than encode_json. This fixed the problem. I learned a lot about Perl Unicode handling in the process, though...
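In terms of the simplified test case above, the fix is a one-line change; to_json (without the utf8 flag) leaves the already-encoded octets untouched, whereas encode_json UTF-8-encodes its output a second time:
my $json = to_json (\@retarray);   # was: encode_json (\@retarray)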
Also recommend: http://perldoc.perl.org/perluniintro.html
Very clear exposition.
NOTE: you should probably also read this answer, which makes my answer sub-par in comparison :-)
The problem is that you have to know which format each string is in; otherwise you'll get incorrect conversions. When handling UTF-8, a string can be in one of two formats:
raw UTF-8 encoded octet string, i.e. \x{100} represented as two octets 0xC4 0x80
internal Perl string representation, i.e. one Unicode character \x{100} (U+0100 Ā LATIN CAPITAL LETTER A WITH MACRON)
If I/O is involved you also need to know if the I/O layer does UTF-8 de/encoding or not. For terminal I/O you also have to consider if it understands UTF-8 or not. Both taken together can make it difficult to get meaningful debug printouts from your code.
If your Perl code needs to process UTF-8 strings after reading them from the source, you must make sure that they are in the internal Perl format. Otherwise you'll get surprising results when you call code that expects Perl strings rather than raw octet strings.
I try to show this in my example code:
#!/usr/bin/perl
use warnings;
use strict;
use JSON;
open(my $utf8_stdout, '>& :encoding(UTF-8)', \*STDOUT)
or die "can't reopen STDOUT as utf-8 file handle: $!\n";
my $hex = "C480";
print "${hex}\n";
my $raw = pack('H*', $hex);
print STDOUT "${raw}\n";
print $utf8_stdout "${raw}\n";
my $decoded;
utf8::decode($decoded = $raw);
print STDOUT ord($decoded), "\n";
print STDOUT "${decoded}\n"; # Wide character in print at...
print $utf8_stdout "${decoded}\n";
my $json = JSON->new->encode([$decoded]);
print STDOUT "${json}\n"; # Wide character in print at...
print $utf8_stdout "${json}\n";
$json = JSON->new->utf8->encode([$decoded]);
print STDOUT "${json}\n";
print $utf8_stdout "${json}\n";
exit 0;
Copy & paste from my terminal (which supports UTF-8). Look closely at the differences between the lines:
$ perl dummy.pl
C480
Ā
Ä
256
Wide character in print at dummy.pl line 21.
Ā
Ā
Wide character in print at dummy.pl line 25.
["Ā"]
["Ā"]
["Ā"]
["Ä"]
But compare this to the following, where STDOUT is not a terminal, but piped to another program. The hex dump always shows "c4 80", i.e. UTF-8 encoded.
$ perl dummy.pl | hexdump -C
Wide character in print at dummy.pl line 21.
Wide character in print at dummy.pl line 22.
Wide character in print at dummy.pl line 25.
Wide character in print at dummy.pl line 26.
00000000 43 34 38 30 0a c4 80 0a c4 80 0a 5b 22 c4 80 22 |C480.......[".."|
00000010 5d 0a 5b 22 c4 80 22 5d 0a 43 34 38 30 0a c4 80 |].[".."].C480...|
00000020 0a 32 35 36 0a c4 80 0a 5b 22 c4 80 22 5d 0a 5b |.256....[".."].[|
00000030 22 c4 80 22 5d 0a |".."].|
00000036

entry0 meaning in radare2

I'm new to binary analysis. I am trying to analyse a simple program I compiled from my C code via gcc.
I followed these steps:
1. aaa
2. afl
and I got this output:
0x00000608 3 23 sym._init
0x00000630 1 8 sym.imp.puts
0x00000638 1 8 sym.imp._IO_getc
0x00000640 1 8 sym.imp.__printf_chk
0x00000648 1 8 sym.imp.__cxa_finalize
0x00000650 4 77 sym.main
0x000006a0 1 43 entry0
0x000006d0 4 50 -> 44 sym.deregister_tm_clones
0x00000710 4 66 -> 57 sym.register_tm_clones
0x00000760 5 50 sym.__do_global_dtors_aux
0x000007a0 4 48 -> 42 sym.frame_dummy
0x000007d0 1 24 sym.smth
0x000007f0 4 101 sym.__libc_csu_init
0x00000860 1 2 sym.__libc_csu_fini
0x00000864 1 9 sym._fini
I can see that main is the main starting point of the program, but I'm wondering what entry0 is. Apparently, from what I saw, it is not a symbol. I tried to run ag @ entry0 and ag @ main and the graphs I saw were very different. Looking at the disassembled code, I see this for entry0:
I suppose this might be some kind of ELF template function that loads the binary and runs it from main. What is entry0, really?
Sorry for making this so long. Thanks in advance.
You should post RE questions on https://reverseengineering.stackexchange.com/.
entry0 is an alias for the _start symbol, which corresponds to the _start function.
The memory address of _start is the program entry point, where control is passed from the loader to the program.
The _start function originates from a relocatable ELF object file called crt1.o that is linked into binaries that require the C runtime environment.
$ objdump -dj .text /usr/lib/x86_64-linux-gnu/crt1.o
/usr/lib/x86_64-linux-gnu/crt1.o: file format elf64-x86-64
Disassembly of section .text:
0000000000000000 <_start>:
0: 31 ed xor %ebp,%ebp
2: 49 89 d1 mov %rdx,%r9
5: 5e pop %rsi
6: 48 89 e2 mov %rsp,%rdx
9: 48 83 e4 f0 and $0xfffffffffffffff0,%rsp
d: 50 push %rax
e: 54 push %rsp
f: 49 c7 c0 00 00 00 00 mov $0x0,%r8
16: 48 c7 c1 00 00 00 00 mov $0x0,%rcx
1d: 48 c7 c7 00 00 00 00 mov $0x0,%rdi
24: e8 00 00 00 00 callq 29 <_start+0x29>
29: f4 hlt
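You can cross-check this from inside radare2 itself (a quick sketch; the seek address shown in the prompt will differ for your binary): ie prints the entry point recorded in the ELF header, and is lists symbols, so both should agree with entry0.
$ r2 ./a.out
[0x000006a0]> ie          # entry point from the ELF header -> matches entry0
[0x000006a0]> is~_start   # grep the symbol table for _start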
With /bin/cat as an example:
$ readelf -h /bin/cat
ELF Header:
Magic: 7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00
Class: ELF64
Data: 2's complement, little endian
Version: 1 (current)
OS/ABI: UNIX - System V
ABI Version: 0
Type: EXEC (Executable file)
Machine: Advanced Micro Devices X86-64
Version: 0x1
Entry point address: 0x402602 <-----
Start of program headers: 64 (bytes into file)
Start of section headers: 46112 (bytes into file)
Flags: 0x0
Size of this header: 64 (bytes)
Size of program headers: 56 (bytes)
Number of program headers: 9
Size of section headers: 64 (bytes)
Number of section headers: 28
Section header string table index: 27
The memory address of the entry point is 0x402602.
402602: 31 ed xor %ebp,%ebp
402604: 49 89 d1 mov %rdx,%r9
402607: 5e pop %rsi
402608: 48 89 e2 mov %rsp,%rdx
40260b: 48 83 e4 f0 and $0xfffffffffffffff0,%rsp
40260f: 50 push %rax
402610: 54 push %rsp
402611: 49 c7 c0 60 89 40 00 mov $0x408960,%r8
402618: 48 c7 c1 f0 88 40 00 mov $0x4088f0,%rcx
40261f: 48 c7 c7 40 1a 40 00 mov $0x401a40,%rdi
402626: e8 d5 f1 ff ff callq 401800 <__libc_start_main@plt>
40262b: f4 hlt
Recommended reading:
Linux x86 Program Start Up or - How the heck do we get to main()?
What is the use of _start() in C?
Generic System V ABI

CompileError: WebAssembly.compile()

I am trying to execute basic C code that calculates a factorial in WebAssembly, and when I load the WASM file in Google Chrome (57.0.2987.98) I am getting:
CompileError: WebAssembly.compile():
Wasm decoding failedResult = expected magic word 00 61 73 6d,
found 30 30 36 31 @+0
C Code:
double fact(int i) {
long long n = 1;
for (;i > 0; i--) {
n *= i;
}
return (double)n;
}
WAST:
(module
(table 0 anyfunc)
(memory $0 1)
(export "memory" (memory $0))
(export "_Z4facti" (func $_Z4facti))
(func $_Z4facti (param $0 i32) (result f64)
(local $1 i64)
(local $2 i64)
(block $label$0
(br_if $label$0
(i32.lt_s
(get_local $0)
(i32.const 1)
)
)
(set_local $1
(i64.add
(i64.extend_s/i32
(get_local $0)
)
(i64.const 1)
)
)
(set_local $2
(i64.const 1)
)
(loop $label$1
(set_local $2
(i64.mul
(get_local $2)
(tee_local $1
(i64.add
(get_local $1)
(i64.const -1)
)
)
)
)
(br_if $label$1
(i64.gt_s
(get_local $1)
(i64.const 1)
)
)
)
(return
(f64.convert_s/i64
(get_local $2)
)
)
)
(f64.const 1)
)
)
WASM Compiled Code:
0061 736d 0100 0000 0186 8080 8000 0160
017f 017c 0382 8080 8000 0100 0484 8080
8000 0170 0000 0583 8080 8000 0100 0106
8180 8080 0000 0795 8080 8000 0206 6d65
6d6f 7279 0200 085f 5a34 6661 6374 6900
000a c380 8080 0001 bd80 8080 0001 027e
0240 2000 4101 480d 0020 00ac 4201 7c21
0142 0121 0203 4020 0220 0142 7f7c 2201
7e21 0220 0142 0155 0d00 0b20 02b9 0f0b
4400 0000 0000 00f0 3f0b
Code executed in Chrome:
async function load(){
let binary = await fetch('https://flinkhub.com/t.wasm');
let bytes = await binary.arrayBuffer();
console.log(bytes);
let module = await WebAssembly.compile(bytes);
// Instantiation is async too; note the export is the C++-mangled "_Z4facti", not "fact"
let instance = await WebAssembly.instantiate(module);
return instance;
}
load().then(instance => console.log(instance.exports._Z4facti(3)));
Can anyone help me out? I have been stuck on this for a whole day and am not able to understand what is going wrong.
I used WebAssembly Explorer to get the WAST and WASM code.
Using the WebAssembly Explorer's download capability you reference, I get the following file (as seen with hexdump):
0000000 00 61 73 6d 01 00 00 00 01 86 80 80 80 00 01 60
0000010 01 7f 01 7c 03 82 80 80 80 00 01 00 04 84 80 80
0000020 80 00 01 70 00 00 05 83 80 80 80 00 01 00 01 06
0000030 81 80 80 80 00 00 07 95 80 80 80 00 02 06 6d 65
0000040 6d 6f 72 79 02 00 08 5f 5a 34 66 61 63 74 69 00
0000050 00 0a c3 80 80 80 00 01 bd 80 80 80 00 01 02 7e
0000060 02 40 20 00 41 01 48 0d 00 20 00 ac 42 01 7c 21
0000070 01 42 01 21 02 03 40 20 02 20 01 42 7f 7c 22 01
0000080 7e 21 02 20 01 42 01 55 0d 00 0b 20 02 b9 0f 0b
0000090 44 00 00 00 00 00 00 f0 3f 0b
000009a
That's a valid .wasm binary, which starts with the magic 00 61 73 6d, a.k.a. \0asm. According to the error message you get, your file starts with 30 30 36 31, which isn't valid.
Double-check the .wasm file you have.
Decoding 30 30 36 31 as ASCII gives 0061, which seems to be your problem: you're loading the textual hex dump instead of the binary. Sure enough, the URL you provide (https://flinkhub.com/t.wasm) contains the following content as-is (I didn't hexdump it! It's ASCII):
0061 736d 0100 0000 0186 8080 8000 0160
017f 017c 0382 8080 8000 0100 0484 8080
8000 0170 0000 0583 8080 8000 0100 0106
8180 8080 0000 0795 8080 8000 0206 6d65
6d6f 7279 0200 085f 5a34 6661 6374 6900
000a c380 8080 0001 bd80 8080 0001 027e
0240 2000 4101 480d 0020 00ac 4201 7c21
0142 0121 0203 4020 0220 0142 7f7c 2201
7e21 0220 0142 0155 0d00 0b20 02b9 0f0b
4400 0000 0000 00f0 3f0b
I'm guessing you overwrote the file saved from the Explorer.
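If the textual dump is all you have, you can turn it back into a real binary before serving it. xxd -r -p reverses a plain hex dump and ignores whitespace (file names here are just examples):
$ xxd -r -p t.wasm > t-binary.wasm
$ head -c 4 t-binary.wasm | xxd
00000000: 0061 736d                                .asm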
I solved this by setting the GOARCH and GOOS environment variables correctly in the zsh shell on my Mac before generating the wasm object. It looks like the Go compiler does not pick up these variables unless you explicitly export them in the parent shell. I simply exported both variables and ran the compiler:
% export GOARCH=wasm
% export GOOS=js
% go build -o hello.wasm hello.go
I am not sure about your setup, but I am using React with esbuild-wasm, and I was getting an error while running the code below:
const service = await esbuild.startService({
worker: true,
wasmURL: "/esbuild.wasm/esbuild.wasm"
})
esbuild.wasm is in my public folder along with node_modules.
So I corrected the URL to wasmURL: "https://unpkg.com/esbuild-wasm@0.8.27/esbuild.wasm" and now it is working.
30 30 36 31 is the hex dump of the string "0061" which is the start of the hex dump of your wasm binary. Did you somehow fetch the textual hexdump instead of the actual binary?
In general, this error means that instead of the binary .wasm file your browser received something else. That can be an error page generated by the web server, or the same .wasm file corrupted somehow. I recommend opening your browser's Developer Tools, going to the Network tab, refreshing the page, and looking at the .wasm resource's request and response.
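A sketch of that kind of defensive check (assuming a browser new enough to have WebAssembly.instantiateStreaming, which the Chrome 57 in the question predates; the URL is a placeholder): failing fast on a non-OK response catches the "server sent an error page" case, and instantiateStreaming additionally rejects responses whose Content-Type is not application/wasm.
async function loadWasm(url) {
  const resp = await fetch(url);
  if (!resp.ok) throw new Error(`HTTP ${resp.status} for ${url}`);
  // Rejects if the Content-Type is wrong or the body is not valid wasm
  const { instance } = await WebAssembly.instantiateStreaming(resp);
  return instance;
}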

How can I implement server side SMTP STARTTLS?

I am trying to implement a simple SMTP server using Vala and GLib + GIO.
Plain-text communication is no problem so far, but when it comes to TLS using STARTTLS, things get harder.
This is the code I have so far:
const string appname = "vsmtpd";
const string hostname = "myserver";
const uint16 listenport = 10025;
const string keyfile = "vsmtpd.key";
const string certfile = "vsmtpd.crt";
// TODO: Parse EHLO instead of constant string
const string username = "myclient";
void process_request_plain (InputStream input, OutputStream output) throws Error {
output.write (#"220 $hostname ESMTP $appname\n".data);
var data_in = new DataInputStream (input);
string line;
while ((line = data_in.read_line (null)) != null) {
stdout.printf ("%s\n", line);
line = line.chomp ();
if (line.substring (0, 5) == "EHLO ") {
output.write (#"250-$hostname Hello $username\n".data);
output.write ("250 STARTTLS\n".data);
}
else if (line == "STARTTLS") {
output.write ("220 Go ahead\n".data);
break;
}
else {
output.write ("502 Command not implemented\n".data);
}
}
}
int main () {
try {
TlsCertificate cert = new TlsCertificate.from_files
(certfile, keyfile);
var service = new SocketService ();
service.add_inet_port (listenport, null);
service.start ();
while (true) {
SocketConnection conn = service.accept (null);
process_request_plain (conn.input_stream, conn.output_stream);
TlsServerConnection tlsconn = TlsServerConnection.@new (conn, cert);
assert_nonnull (tlsconn);
// TODO: Is this necessary?
tlsconn.accept_certificate.connect ((peer_cert, errors) => {
stdout.printf ("TLS accepting peer cert\n");
return true;
});
try {
tlsconn.handshake ();
stdout.printf ("TLS handshake ok\n");
} catch (Error e) {
stdout.printf ("TLS handshake failed\n");
stderr.printf ("%s\n", e.message);
}
}
} catch (Error e) {
stderr.printf ("%s\n", e.message);
}
return 0;
}
Given a valid SSL certificate in vsmtpd.key and vsmtpd.crt (which I generated with openssl req -x509 -newkey rsa:2048 -keyout vsmtpd.key -out vsmtpd.pem -days 365 -nodes), I start the program, and I also run this OpenSSL command to test STARTTLS:
openssl s_client -connect localhost:10025 -starttls smtp -debug
The output from my program is:
EHLO openssl.client.net
STARTTLS
TLS handshake failed
Stream is already closed
The output from OpenSSL is:
CONNECTED(00000003)
read from 0x6ae470 [0x6af050] (4096 bytes => 26 (0x1A))
0000 - 32 32 30 20 6d 79 73 65-72 76 65 72 20 45 53 4d 220 myserver ESM
0010 - 54 50 20 76 73 6d 74 70-64 0a TP vsmtpd.
write to 0x6ae470 [0x6b0060] (25 bytes => 25 (0x19))
0000 - 45 48 4c 4f 20 6f 70 65-6e 73 73 6c 2e 63 6c 69 EHLO openssl.cli
0010 - 65 6e 74 2e 6e 65 74 0d-0a ent.net..
read from 0x6ae470 [0x6af050] (4096 bytes => 28 (0x1C))
0000 - 32 35 30 2d 6d 79 73 65-72 76 65 72 20 48 65 6c 250-myserver Hel
0010 - 6c 6f 20 6d 79 63 6c 69-65 6e 74 0a lo myclient.
read from 0x6ae470 [0x6af050] (4096 bytes => 13 (0xD))
0000 - 32 35 30 20 53 54 41 52-54 54 4c 53 0a 250 STARTTLS.
write to 0x6ae470 [0x7ffdb4aea9e0] (10 bytes => 10 (0xA))
0000 - 53 54 41 52 54 54 4c 53-0d 0a STARTTLS..
read from 0x6ae470 [0x6a13a0] (8192 bytes => 13 (0xD))
0000 - 32 32 30 20 47 6f 20 61-68 65 61 64 0a 220 Go ahead.
write to 0x6ae470 [0x6aefa0] (204 bytes => 204 (0xCC))
0000 - 16 03 01 00 c7 01 00 00-c3 03 03 0e ac 05 35 45 ..............5E
0010 - db 95 f6 a7 37 55 d8 ca-14 d7 5f 8e 6a 62 08 50 ....7U...._.jb.P
0020 - c9 81 b7 55 75 a8 4c 17-c0 a1 53 00 00 76 00 a5 ...Uu.L...S..v..
0030 - 00 a3 00 a1 00 9f 00 6b-00 6a 00 69 00 68 00 39 .......k.j.i.h.9
0040 - 00 38 00 37 00 36 00 88-00 87 00 86 00 85 00 9d .8.7.6..........
0050 - 00 3d 00 35 00 84 00 a4-00 a2 00 a0 00 9e 00 67 .=.5...........g
0060 - 00 40 00 3f 00 3e 00 33-00 32 00 31 00 30 00 9a .@.?.>.3.2.1.0..
0070 - 00 99 00 98 00 97 00 45-00 44 00 43 00 42 00 9c .......E.D.C.B..
0080 - 00 3c 00 2f 00 96 00 41-00 07 00 05 00 04 00 16 .<./...A........
0090 - 00 13 00 10 00 0d 00 0a-00 15 00 12 00 0f 00 0c ................
00a0 - 00 09 00 ff 02 01 00 00-23 00 23 00 00 00 0d 00 ........#.#.....
00b0 - 16 00 14 06 01 06 02 05-01 05 02 04 01 04 02 03 ................
00c0 - 01 03 02 02 01 02 02 00-0f 00 01 01 ............
read from 0x6ae470 [0x6b4500] (7 bytes => -1 (0xFFFFFFFFFFFFFFFF))
write:errno=104
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 80 bytes and written 239 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
---
From the output, I understand that my program closes the connection before the TLS handshake can complete. (I also tried using Thunderbird and Claws Mail.)
What am I doing wrong here?
PS: I couldn't find any example of how to use GTlsServerConnection in a STARTTLS situation.
Update:
I tried the -ssl2, -ssl3, -tls1, -tls1_1 and -tls1_2 options of OpenSSL, which also don't work.
openssl s_client -connect localhost:10025 -starttls smtp -state
yields:
CONNECTED(00000003)
SSL_connect:before/connect initialization
SSL_connect:SSLv2/v3 write client hello A
SSL_connect:error in SSLv2/v3 read server hello A
write:errno=104
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 100 bytes and written 239 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
---
So the client sends "client hello A", but the server doesn't send a correct "server hello A".
As an alternative you can also try gnutls-cli --crlf --starttls-proto=smtp --port 10025 localhost.
The output from GNUTLS_DEBUG_LEVEL=11 ./vsmtpd is:
gnutls[2]: Enabled GnuTLS logging...
gnutls[2]: Intel SSSE3 was detected
gnutls[2]: Intel AES accelerator was detected
gnutls[2]: Intel GCM accelerator was detected
gnutls[2]: Enabled GnuTLS logging...
gnutls[2]: Intel SSSE3 was detected
gnutls[2]: Intel AES accelerator was detected
gnutls[2]: Intel GCM accelerator was detected
gnutls[3]: ASSERT: x509_b64.c:299
gnutls[9]: Could not find '-----BEGIN RSA PRIVATE KEY'
gnutls[3]: ASSERT: x509_b64.c:299
gnutls[9]: Could not find '-----BEGIN DSA PRIVATE KEY'
gnutls[3]: ASSERT: x509_b64.c:299
gnutls[9]: Could not find '-----BEGIN EC PRIVATE KEY'
gnutls[3]: ASSERT: privkey.c:503
gnutls[2]: Falling back to PKCS #8 key decoding
EHLO openssl.client.net
STARTTLS
gnutls[5]: REC[0xfa67e0]: Allocating epoch #0
gnutls[3]: ASSERT: gnutls_constate.c:586
gnutls[5]: REC[0xfa67e0]: Allocating epoch #1
gnutls[3]: ASSERT: gnutls_buffers.c:1138
gnutls[10]: READ: -1 returned from 0xfa4120, errno=0 gerrno=5
gnutls[3]: ASSERT: gnutls_buffers.c:364
gnutls[3]: ASSERT: gnutls_buffers.c:572
gnutls[3]: ASSERT: gnutls_record.c:1058
gnutls[3]: ASSERT: gnutls_record.c:1179
gnutls[3]: ASSERT: gnutls_buffers.c:1392
gnutls[3]: ASSERT: gnutls_handshake.c:1428
gnutls[3]: ASSERT: gnutls_handshake.c:3098
gnutls[3]: ASSERT: gnutls_db.c:334
TLS handshake failed
Stream is already closed
gnutls[5]: REC[0xfa67e0]: Start of epoch cleanup
gnutls[5]: REC[0xfa67e0]: End of epoch cleanup
gnutls[5]: REC[0xfa67e0]: Epoch #0 freed
gnutls[5]: REC[0xfa67e0]: Epoch #1 freed
The problem is hidden somewhere in the implementation of DataInputStream.
Once I removed it and used the following replacement for read_line () instead, it works just fine.
string? read_line (InputStream input) throws Error {
var buffer = new uint8[1];
var sb = new StringBuilder ();
buffer[0] = '\0';
while (buffer[0] != '\n') {
// read () returns 0 at end of stream; bail out so the caller's null check works
if (input.read (buffer) <= 0) {
return null;
}
sb.append_c ((char) buffer[0]);
}
return sb.str;
}
void process_request_plain (InputStream input, OutputStream output) throws Error {
output.write (#"220 $hostname ESMTP $appname\n".data);
string line;
while ((line = read_line (input)) != null) {
stdout.printf ("%s\n", line);
line = line.chomp ();
if (line.substring (0, 5) == "EHLO ") {
output.write (#"250-$hostname Hello $username\n".data);
output.write ("250 STARTTLS\n".data);
}
else if (line == "STARTTLS") {
output.write ("220 Go ahead\n".data);
break;
}
else {
output.write ("502 Command not implemented\n".data);
}
}
}
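As for why DataInputStream breaks the handshake: my guess (an assumption on my part, but it matches the "Stream is already closed" error) is that two things are at play. DataInputStream is a GFilterInputStream, which by default closes its base stream when the wrapper is destroyed, so the socket is dead by the time the handshake starts; and as a GBufferedInputStream it can also read ahead past the line you asked for, swallowing bytes that belong to the TLS ClientHello. If you would rather keep DataInputStream, something along these lines might work:
var data_in = new DataInputStream (input);
// Don't close the underlying socket stream when the wrapper is destroyed
data_in.set_close_base_stream (false);
// Keep the read-ahead buffer tiny so no TLS bytes end up buffered away
data_in.set_buffer_size (1);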