How to interpret KVM emulation error - qemu

I am running qemu and I hit an emulation error that only occurs when I run with KVM; without KVM I do not see it. I have tried different CPU models for qemu, since the exception in the kernel indicates an instruction decoding error, but this doesn't help. What does this error mean, and what do I need in order to debug it (e.g. symbols and VM mappings in the guest)?
KVM internal error. Suberror: 1
emulation failure
RAX=0000000000000001 RBX=ffff8b00f1820b10 RCX=0000000000000000 RDX=0000000000000001
RSI=0000000000000001 RDI=ffff8b00f18a15ba RBP=ffffe58890ee94d0 RSP=ffff8b00f1820a10
R8 =0000000000000002 R9 =fffff80a939b2048 R10=fffff80a93b66380 R11=fffff80a933e0000
R12=0000000000000000 R13=0000000000000002 R14=ffffe5888ef217b8 R15=ffffe5888ef21403
RIP=fffff80a935f0031 RFL=00010293 [--S-A-C] CPL=0 II=0 A20=1 SMM=0 HLT=0
ES =002b 0000000000000000 ffffffff 00c0f300 DPL=3 DS [-WA]
CS =0010 0000000000000000 00000000 00209b00 DPL=0 CS64 [-RA]
SS =0018 0000000000000000 00000000 00409300 DPL=0 DS [-WA]
DS =002b 0000000000000000 ffffffff 00c0f300 DPL=3 DS [-WA]
FS =0053 00000000a5c9e000 00003c00 0040f300 DPL=3 DS [-WA]
GS =002b fffff8018c9c0000 ffffffff 00c0f300 DPL=3 DS [-WA]
LDT=0000 0000000000000000 ffffffff 00c00000
TR =0040 fffff8018e44e070 00000067 00008b00 DPL=0 TSS64-busy
GDT= fffff8018e44d000 0000006f
IDT= fffff8018e44d070 00000fff
CR0=80050033 CR2=ffffa80413ca5000 CR3=0000000108a69000 CR4=001506f8
DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000
DR6=00000000ffff0ff0 DR7=0000000000000400
EFER=0000000000000d01
Code=ff 0b 75 10 f7 85 64 02 00 00 00 00 02 00 0f 84 e9 04 00 00 <0f> 10 07 0f b6 c1 48 6b c8 26 0f 11 44 19 01 0f 10 4f 10 0f 11 4c 19 11 8b 47 20 89 44 19

Per http://www.linux-kvm.org/page/Tracing, you need to enable kernel tracing and forward the resulting trace to the KVM mailing list for assistance.
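As a first step in interpreting the dump yourself: in the Code= line, the byte wrapped in <> is the first byte of the instruction at RIP that KVM failed to emulate, and x86 instructions are at most 15 bytes long. A small illustrative Python helper (names are mine) to pull those bytes out for a disassembler:

```python
import re

def faulting_bytes(code_line, count=15):
    """Return the instruction bytes starting at RIP from a KVM 'Code=' dump.

    The byte wrapped in <> marks RIP; since x86 instructions are at most
    15 bytes, the bytes from there onward cover the faulting instruction.
    """
    hexpairs = code_line.split("=", 1)[1].split()
    # Find the <..> marker that denotes the byte at RIP.
    start = next(i for i, b in enumerate(hexpairs) if b.startswith("<"))
    cleaned = [b.strip("<>") for b in hexpairs]
    return " ".join(cleaned[start:start + count])

code = ("Code=ff 0b 75 10 f7 85 64 02 00 00 00 00 02 00 0f 84 e9 04 00 00 "
        "<0f> 10 07 0f b6 c1 48 6b c8 26 0f 11 44 19 01 0f 10 4f 10 0f 11 "
        "4c 19 11 8b 47 20 89 44 19")
print(faulting_bytes(code))  # -> 0f 10 07 0f b6 c1 48 6b c8 26 0f 11 44 19 01
```

Here the bytes at RIP begin with 0f 10 07, which decodes to movups (%rdi),%xmm0, an SSE load. KVM's in-kernel instruction emulator handles only a limited instruction set, and an SSE access to emulated (MMIO) memory is a common trigger for exactly this kind of "emulation failure".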

Related

ACR122U NFC Reader compatible with EMV Contactless Cards

I am trying to read an EMV card using an APDU command, however it seems the ACR122U external reader is blocking the APDU command.
Select Command:
APDU-C -> 00 A4 04 00 0E 32 50 41 59 2E 53 59 53 2E 44 44 46 30 31 0E
APDU-R <- Error no response
Is it possible that the ACR122U reader is blocking the command?
You want to SELECT FILE 2PAY.SYS.DDF01, the "Proximity Payment System Environment (PPSE)" used by contactless cards, to get the PPSE directory; the card should then respond with the available Application Identifiers (AIDs). However, you set Le to 0E; replace it with 00.
Corrected APDU:
PPSE = '00 a4 04 00 0e 32 50 41 59 2e 53 59 53 2e 44 44 46 30 31 00'
If the selection fails, the ADF doesn't exist (SW1/SW2 = 6A 82).
If the selection succeeds, continue by SELECTing the application's AID.
Possible AIDs:
A0000000031010
A0000000032020
A0000000041010
A0000000043060
AIDPrefix ='00 a4 04 00 07'
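For reference, the corrected APDU can be assembled programmatically; a minimal sketch (Python, helper names are mine):

```python
def select_apdu(name: bytes) -> bytes:
    """Build a SELECT-by-name APDU: CLA=00, INS=A4, P1=04, P2=00, Lc, data, Le=00."""
    return bytes([0x00, 0xA4, 0x04, 0x00, len(name)]) + name + b"\x00"

def check_sw(resp: bytes) -> str:
    """Interpret the trailing status word (SW1/SW2) of a response APDU."""
    sw = resp[-2:].hex().upper()
    return {"9000": "success", "6A82": "file/ADF not found"}.get(sw, "SW=" + sw)

ppse = select_apdu(b"2PAY.SYS.DDF01")  # contactless Payment System Environment
print(ppse.hex())  # -> 00a404000e325041592e5359532e444446303100
```

The same `select_apdu` helper works for the AIDs listed above, e.g. `select_apdu(bytes.fromhex("A0000000031010"))`.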

How exactly are Destructor calls made

I wonder how exactly a constructor or destructor is called, e.g. in C++? I'm especially interested in the OS point of view. I'm also interested in the case where we run an Android app written in Java and want to record a user session: can we use a constructor to store the time the session begins and a destructor to store the time it ends, and save the data in a database? Does the OS actually handle destructor calls, or does something else? Thanks in advance!
I'm not familiar with how Java handles constructors and destructors (Java involves a virtual-machine layer), but I'll try to answer this from a C++ point of view.
The short answer to your question: the OS does not participate in constructor or destructor calls (unless their bodies perform heap allocation, a system call, and so on). The compiler inserts calls to the constructor and destructor in the right places when it generates machine code.
For a simple program as follows:
class A {
    int* i;
public:
    A() { i = new int; }
    ~A() { delete i; }
};

int main() {
    A a;
}
Let's examine the assembly code emitted by compiler using objdump:
00000000004006a6 <main>:
4006a6: 55 push %rbp
4006a7: 48 89 e5 mov %rsp,%rbp
4006aa: 48 83 ec 10 sub $0x10,%rsp
4006ae: 64 48 8b 04 25 28 00 mov %fs:0x28,%rax
4006b5: 00 00
4006b7: 48 89 45 f8 mov %rax,-0x8(%rbp)
4006bb: 31 c0 xor %eax,%eax
4006bd: 48 8d 45 f0 lea -0x10(%rbp),%rax
4006c1: 48 89 c7 mov %rax,%rdi
4006c4: e8 27 00 00 00 callq 4006f0 <_ZN1AC1Ev>
4006c9: 48 8d 45 f0 lea -0x10(%rbp),%rax
4006cd: 48 89 c7 mov %rax,%rdi
4006d0: e8 3f 00 00 00 callq 400714 <_ZN1AD1Ev>
4006d5: b8 00 00 00 00 mov $0x0,%eax
4006da: 48 8b 55 f8 mov -0x8(%rbp),%rdx
4006de: 64 48 33 14 25 28 00 xor %fs:0x28,%rdx
4006e5: 00 00
4006e7: 74 05 je 4006ee <main+0x48>
4006e9: e8 92 fe ff ff callq 400580 <__stack_chk_fail@plt>
4006ee: c9 leaveq
4006ef: c3 retq
Note that depending on the underlying architecture and compiler, your output might not be the same as mine, but the structure should generally be the same.
You can see the compiler automatically generates calls to the constructor, callq 4006c4 <_ZN1AC1Ev>, and the destructor, callq 400714 <_ZN1AD1Ev>. The assembly code for the constructor is:
00000000004006f0 <_ZN1AC1Ev>:
4006f0: 55 push %rbp
4006f1: 48 89 e5 mov %rsp,%rbp
4006f4: 48 83 ec 10 sub $0x10,%rsp
4006f8: 48 89 7d f8 mov %rdi,-0x8(%rbp)
4006fc: bf 04 00 00 00 mov $0x4,%edi
400701: e8 8a fe ff ff callq 400590 <_Znwm@plt>
400706: 48 89 c2 mov %rax,%rdx
400709: 48 8b 45 f8 mov -0x8(%rbp),%rax
40070d: 48 89 10 mov %rdx,(%rax)
400710: 90 nop
400711: c9 leaveq
400712: c3 retq
400713: 90 nop
Assembly for destructor:
0000000000400714 <_ZN1AD1Ev>:
400714: 55 push %rbp
400715: 48 89 e5 mov %rsp,%rbp
400718: 48 83 ec 10 sub $0x10,%rsp
40071c: 48 89 7d f8 mov %rdi,-0x8(%rbp)
400720: 48 8b 45 f8 mov -0x8(%rbp),%rax
400724: 48 8b 00 mov (%rax),%rax
400727: 48 89 c7 mov %rax,%rdi
40072a: e8 31 fe ff ff callq 400560 <_ZdlPv@plt>
40072f: 90 nop
400730: c9 leaveq
400731: c3 retq
400732: 66 2e 0f 1f 84 00 00 nopw %cs:0x0(%rax,%rax,1)
400739: 00 00 00
40073c: 0f 1f 40 00 nopl 0x0(%rax)

entry0 meaning in radare2

I'm new to binary analysis. I am trying to analyse a simple program I compiled from my C code via gcc.
I followed these steps:
1. aaa
2. afl
and I got this output:
0x00000608 3 23 sym._init
0x00000630 1 8 sym.imp.puts
0x00000638 1 8 sym.imp._IO_getc
0x00000640 1 8 sym.imp.__printf_chk
0x00000648 1 8 sym.imp.__cxa_finalize
0x00000650 4 77 sym.main
0x000006a0 1 43 entry0
0x000006d0 4 50 -> 44 sym.deregister_tm_clones
0x00000710 4 66 -> 57 sym.register_tm_clones
0x00000760 5 50 sym.__do_global_dtors_aux
0x000007a0 4 48 -> 42 sym.frame_dummy
0x000007d0 1 24 sym.smth
0x000007f0 4 101 sym.__libc_csu_init
0x00000860 1 2 sym.__libc_csu_fini
0x00000864 1 9 sym._fini
I can tell that main is the main starting point of the program, but I'm wondering what entry0 is; apparently, from what I saw, it is not a symbol. I tried running ag @ entry0 and ag @ main, and the two graphs were very different. By looking at the disassembled code for entry0, I'm supposing this might be a kind of ELF template function to load the binary and run it from main. What is entry0 really?
Sorry for the long question. Thanks in advance.
You should post RE questions on https://reverseengineering.stackexchange.com/.
entry0 is an alias for the _start symbol, which corresponds to the _start function.
The memory address of _start is the program entry point, where control is passed from the loader to the program.
The _start function originates from a relocatable ELF object file called crt1.o that is linked into binaries that require the C runtime environment.
$ objdump -dj .text /usr/lib/x86_64-linux-gnu/crt1.o
/usr/lib/x86_64-linux-gnu/crt1.o: file format elf64-x86-64
Disassembly of section .text:
0000000000000000 <_start>:
0: 31 ed xor %ebp,%ebp
2: 49 89 d1 mov %rdx,%r9
5: 5e pop %rsi
6: 48 89 e2 mov %rsp,%rdx
9: 48 83 e4 f0 and $0xfffffffffffffff0,%rsp
d: 50 push %rax
e: 54 push %rsp
f: 49 c7 c0 00 00 00 00 mov $0x0,%r8
16: 48 c7 c1 00 00 00 00 mov $0x0,%rcx
1d: 48 c7 c7 00 00 00 00 mov $0x0,%rdi
24: e8 00 00 00 00 callq 29 <_start+0x29>
29: f4 hlt
With /bin/cat as an example:
$ readelf -h /bin/cat
ELF Header:
Magic: 7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00
Class: ELF64
Data: 2's complement, little endian
Version: 1 (current)
OS/ABI: UNIX - System V
ABI Version: 0
Type: EXEC (Executable file)
Machine: Advanced Micro Devices X86-64
Version: 0x1
Entry point address: 0x402602 <-----
Start of program headers: 64 (bytes into file)
Start of section headers: 46112 (bytes into file)
Flags: 0x0
Size of this header: 64 (bytes)
Size of program headers: 56 (bytes)
Number of program headers: 9
Size of section headers: 64 (bytes)
Number of section headers: 28
Section header string table index: 27
The memory address of the entry point is 0x402602.
402602: 31 ed xor %ebp,%ebp
402604: 49 89 d1 mov %rdx,%r9
402607: 5e pop %rsi
402608: 48 89 e2 mov %rsp,%rdx
40260b: 48 83 e4 f0 and $0xfffffffffffffff0,%rsp
40260f: 50 push %rax
402610: 54 push %rsp
402611: 49 c7 c0 60 89 40 00 mov $0x408960,%r8
402618: 48 c7 c1 f0 88 40 00 mov $0x4088f0,%rcx
40261f: 48 c7 c7 40 1a 40 00 mov $0x401a40,%rdi
402626: e8 d5 f1 ff ff callq 401800 <__libc_start_main@plt>
40262b: f4 hlt
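The entry point address can also be read programmatically: in a 64-bit little-endian ELF, e_entry is the 8-byte field at offset 24 of the header. A minimal sketch (Python used here for brevity):

```python
import struct

def elf64_entry(header: bytes) -> int:
    """Return the e_entry field from an ELF64 header (little-endian)."""
    assert header[:4] == b"\x7fELF", "not an ELF file"
    assert header[4] == 2, "EI_CLASS is not ELFCLASS64"
    (entry,) = struct.unpack_from("<Q", header, 24)  # e_entry at offset 24
    return entry

# Usage, e.g.:
#   with open("/bin/cat", "rb") as f:
#       print(hex(elf64_entry(f.read(64))))
```

For the /bin/cat example above, this returns the same 0x402602 that readelf -h reports.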
Recommended reading:
Linux x86 Program Start Up or - How the heck do we get to main()?
What is the use of _start() in C?
Generic System V ABI

Docker Notary no trust data available

I'm new to Docker Notary and need a server set up for my research work. The issue is that I am using a self-signed certificate; I have already overwritten the default root-ca.crt, notary-signer.crt and notary-server.crt.
OpenSSL validates the certificate correctly, as can be seen from the output:
subject=/C=SG/ST=Some-State/O=<value>/OU=DCT/CN=<Amazon EC2 hostname>/emailAddress=<value>
issuer=/C=SG/ST=Some-State/O=<value>/OU=DCT/CN=<Amazon EC2 hostname>/emailAddress=<value>
No client certificate CA names sent
Peer signing digest: SHA384
Server Temp Key: ECDH, P-256, 256 bits
SSL handshake has read 2348 bytes and written 431 bytes
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-GCM-SHA384
Server public key is 4096 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1.2
Cipher : ECDHE-RSA-AES256-GCM-SHA384
Session-ID: 398FCDC32B644161D9243A25AA4E001408874E93247427609CD95E6EF8F83761
Session-ID-ctx:
Master-Key: 5427F024069D898563712EA826F2DF1582E8383F63FB13E9F6C6B6CAF1C4DC0A027942679426341F889F2E9DB0062C1D
Key-Arg : None
PSK identity: None
PSK identity hint: None
SRP username: None
TLS session ticket:
0000 - bc e4 fe 83 82 a1 1b 96-44 f7 1d 0c 9e 6f 45 8d ........D....oE.
0010 - 93 1e 5a c2 8c 9f 72 db-f6 45 4a 86 69 fe 30 20 ..Z...r..EJ.i.0
0020 - 98 9f 08 3d f5 bd ad d5-65 df 48 58 e4 6c f9 06 ...=....e.HX.l..
0030 - b6 28 e7 df 03 04 ac ad-ea 87 2c d8 db 64 73 44 .(........,..dsD
0040 - 0a b7 26 fe 2f a7 39 9c-5d 25 ca 21 68 76 37 26 ..&./.9.]%.!hv7&
0050 - 5e 0b d7 ea be 97 ea c8-16 b6 b0 04 30 13 0d 1e ^...........0...
0060 - 01 98 5e cf a1 58 61 df-30 14 d8 a6 f5 c0 7b 85 ..^..Xa.0.....{.
0070 - 11 cb 4c 73 93 e3 1e 53- ..Ls...S
Start Time: 1494736027
Timeout : 300 (sec)
Verify return code: 18 (self signed certificate)
HTTP/1.1 400 Bad Request
Content-Type: text/plain; charset=utf-8
Connection: close
400 Bad Requestclosed
I have also edited server-config.json to set the certificate algorithm to RSA, and edited config.json in cmd/notary to point to my hostname:4443.
The issue is that after I rebuild Docker Notary and run the command
notary -s https://<server hostname>:4443 -d ~/.docker/trust list docker.io/library
I get this result
* fatal: no trust data available
I find it frustrating that I have been pondering over this for the last 2 days, and the documentation is not very clear on how to do it either (probably because it is not even stable yet).
Any help on this would be appreciated!

How can I implement server side SMTP STARTTLS?

I am trying to implement a simple SMTP server using Vala and GLib + GIO.
Plain text communication is no problem so far, but when it comes to TLS using STARTTLS things get harder.
This is the code I have so far:
const string appname = "vsmtpd";
const string hostname = "myserver";
const uint16 listenport = 10025;
const string keyfile = "vsmtpd.key";
const string certfile = "vsmtpd.crt";
// TODO: Parse EHLO instead of constant string
const string username = "myclient";
void process_request_plain (InputStream input, OutputStream output) throws Error {
    output.write (@"220 $hostname ESMTP $appname\n".data);
    var data_in = new DataInputStream (input);
    string line;
    while ((line = data_in.read_line (null)) != null) {
        stdout.printf ("%s\n", line);
        line = line.chomp ();
        if (line.substring (0, 5) == "EHLO ") {
            output.write (@"250-$hostname Hello $username\n".data);
            output.write ("250 STARTTLS\n".data);
        }
        else if (line == "STARTTLS") {
            output.write ("220 Go ahead\n".data);
            break;
        }
        else {
            output.write ("502 Command not implemented\n".data);
        }
    }
}
int main () {
    try {
        TlsCertificate cert = new TlsCertificate.from_files (certfile, keyfile);
        var service = new SocketService ();
        service.add_inet_port (listenport, null);
        service.start ();
        while (true) {
            SocketConnection conn = service.accept (null);
            process_request_plain (conn.input_stream, conn.output_stream);
            TlsServerConnection tlsconn = TlsServerConnection.@new (conn, cert);
            assert_nonnull (tlsconn);
            // TODO: Is this necessary?
            tlsconn.accept_certificate.connect ((peer_cert, errors) => {
                stdout.printf ("TLS accepting peer cert\n");
                return true;
            });
            try {
                tlsconn.handshake ();
                stdout.printf ("TLS handshake ok\n");
            } catch (Error e) {
                stdout.printf ("TLS handshake failed\n");
                stderr.printf ("%s\n", e.message);
            }
        }
    } catch (Error e) {
        stderr.printf ("%s\n", e.message);
    }
    return 0;
}
Given a valid SSL certificate in vsmtpd.key and vsmtpd.crt (which I generated with openssl req -x509 -newkey rsa:2048 -keyout vsmtpd.key -out vsmtpd.pem -days 365 -nodes), I start the program and also run this OpenSSL command to test STARTTLS:
openssl s_client -connect localhost:10025 -starttls smtp -debug
The output from my program is:
EHLO openssl.client.net
STARTTLS
TLS handshake failed
Stream is already closed
The output from OpenSSL is:
CONNECTED(00000003)
read from 0x6ae470 [0x6af050] (4096 bytes => 26 (0x1A))
0000 - 32 32 30 20 6d 79 73 65-72 76 65 72 20 45 53 4d 220 myserver ESM
0010 - 54 50 20 76 73 6d 74 70-64 0a TP vsmtpd.
write to 0x6ae470 [0x6b0060] (25 bytes => 25 (0x19))
0000 - 45 48 4c 4f 20 6f 70 65-6e 73 73 6c 2e 63 6c 69 EHLO openssl.cli
0010 - 65 6e 74 2e 6e 65 74 0d-0a ent.net..
read from 0x6ae470 [0x6af050] (4096 bytes => 28 (0x1C))
0000 - 32 35 30 2d 6d 79 73 65-72 76 65 72 20 48 65 6c 250-myserver Hel
0010 - 6c 6f 20 6d 79 63 6c 69-65 6e 74 0a lo myclient.
read from 0x6ae470 [0x6af050] (4096 bytes => 13 (0xD))
0000 - 32 35 30 20 53 54 41 52-54 54 4c 53 0a 250 STARTTLS.
write to 0x6ae470 [0x7ffdb4aea9e0] (10 bytes => 10 (0xA))
0000 - 53 54 41 52 54 54 4c 53-0d 0a STARTTLS..
read from 0x6ae470 [0x6a13a0] (8192 bytes => 13 (0xD))
0000 - 32 32 30 20 47 6f 20 61-68 65 61 64 0a 220 Go ahead.
write to 0x6ae470 [0x6aefa0] (204 bytes => 204 (0xCC))
0000 - 16 03 01 00 c7 01 00 00-c3 03 03 0e ac 05 35 45 ..............5E
0010 - db 95 f6 a7 37 55 d8 ca-14 d7 5f 8e 6a 62 08 50 ....7U...._.jb.P
0020 - c9 81 b7 55 75 a8 4c 17-c0 a1 53 00 00 76 00 a5 ...Uu.L...S..v..
0030 - 00 a3 00 a1 00 9f 00 6b-00 6a 00 69 00 68 00 39 .......k.j.i.h.9
0040 - 00 38 00 37 00 36 00 88-00 87 00 86 00 85 00 9d .8.7.6..........
0050 - 00 3d 00 35 00 84 00 a4-00 a2 00 a0 00 9e 00 67 .=.5...........g
0060 - 00 40 00 3f 00 3e 00 33-00 32 00 31 00 30 00 9a .@.?.>.3.2.1.0..
0070 - 00 99 00 98 00 97 00 45-00 44 00 43 00 42 00 9c .......E.D.C.B..
0080 - 00 3c 00 2f 00 96 00 41-00 07 00 05 00 04 00 16 .<./...A........
0090 - 00 13 00 10 00 0d 00 0a-00 15 00 12 00 0f 00 0c ................
00a0 - 00 09 00 ff 02 01 00 00-23 00 23 00 00 00 0d 00 ........#.#.....
00b0 - 16 00 14 06 01 06 02 05-01 05 02 04 01 04 02 03 ................
00c0 - 01 03 02 02 01 02 02 00-0f 00 01 01 ............
read from 0x6ae470 [0x6b4500] (7 bytes => -1 (0xFFFFFFFFFFFFFFFF))
write:errno=104
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 80 bytes and written 239 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
---
From the output, my understanding is that my program closes the connection before the TLS handshake can complete. (I also tried using Thunderbird and Claws Mail.)
What am I doing wrong here?
PS: I couldn't find any example of how to use GTlsServerConnection in a STARTTLS situation.
Update:
I tried the -ssl2, -ssl3, -tls1, -tls1_1, -tls1_2 options of OpenSSL, which also don't work.
openssl s_client -connect localhost:10025 -starttls smtp -state
yields:
CONNECTED(00000003)
SSL_connect:before/connect initialization
SSL_connect:SSLv2/v3 write client hello A
SSL_connect:error in SSLv2/v3 read server hello A
write:errno=104
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 100 bytes and written 239 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
---
So the client sends "client hello A", but the server doesn't send a correct "server hello A".
As an alternative you can also try gnutls-cli --crlf --starttls-proto=smtp --port 10025 localhost.
The output from GNUTLS_DEBUG_LEVEL=11 ./vsmtpd is:
gnutls[2]: Enabled GnuTLS logging...
gnutls[2]: Intel SSSE3 was detected
gnutls[2]: Intel AES accelerator was detected
gnutls[2]: Intel GCM accelerator was detected
gnutls[2]: Enabled GnuTLS logging...
gnutls[2]: Intel SSSE3 was detected
gnutls[2]: Intel AES accelerator was detected
gnutls[2]: Intel GCM accelerator was detected
gnutls[3]: ASSERT: x509_b64.c:299
gnutls[9]: Could not find '-----BEGIN RSA PRIVATE KEY'
gnutls[3]: ASSERT: x509_b64.c:299
gnutls[9]: Could not find '-----BEGIN DSA PRIVATE KEY'
gnutls[3]: ASSERT: x509_b64.c:299
gnutls[9]: Could not find '-----BEGIN EC PRIVATE KEY'
gnutls[3]: ASSERT: privkey.c:503
gnutls[2]: Falling back to PKCS #8 key decoding
EHLO openssl.client.net
STARTTLS
gnutls[5]: REC[0xfa67e0]: Allocating epoch #0
gnutls[3]: ASSERT: gnutls_constate.c:586
gnutls[5]: REC[0xfa67e0]: Allocating epoch #1
gnutls[3]: ASSERT: gnutls_buffers.c:1138
gnutls[10]: READ: -1 returned from 0xfa4120, errno=0 gerrno=5
gnutls[3]: ASSERT: gnutls_buffers.c:364
gnutls[3]: ASSERT: gnutls_buffers.c:572
gnutls[3]: ASSERT: gnutls_record.c:1058
gnutls[3]: ASSERT: gnutls_record.c:1179
gnutls[3]: ASSERT: gnutls_buffers.c:1392
gnutls[3]: ASSERT: gnutls_handshake.c:1428
gnutls[3]: ASSERT: gnutls_handshake.c:3098
gnutls[3]: ASSERT: gnutls_db.c:334
TLS handshake failed
Stream is already closed
gnutls[5]: REC[0xfa67e0]: Start of epoch cleanup
gnutls[5]: REC[0xfa67e0]: End of epoch cleanup
gnutls[5]: REC[0xfa67e0]: Epoch #0 freed
gnutls[5]: REC[0xfa67e0]: Epoch #1 freed
The problem is hidden in the implementation of DataInputStream: being a buffered stream, it reads ahead of the line it returns, so the first bytes of the TLS handshake end up in its internal buffer instead of remaining on the underlying stream.
Once I removed it and used the following replacement for read_line () instead, it works just fine.
string? read_line (InputStream input) throws Error {
    var buffer = new uint8[1];
    var sb = new StringBuilder ();
    buffer[0] = '\0';
    while (buffer[0] != '\n') {
        input.read (buffer);
        sb.append_c ((char) buffer[0]);
    }
    return (string) sb.data;
}
void process_request_plain (InputStream input, OutputStream output) throws Error {
    output.write (@"220 $hostname ESMTP $appname\n".data);
    string line;
    while ((line = read_line (input)) != null) {
        stdout.printf ("%s\n", line);
        line = line.chomp ();
        if (line.substring (0, 5) == "EHLO ") {
            output.write (@"250-$hostname Hello $username\n".data);
            output.write ("250 STARTTLS\n".data);
        }
        else if (line == "STARTTLS") {
            output.write ("220 Go ahead\n".data);
            break;
        }
        else {
            output.write ("502 Command not implemented\n".data);
        }
    }
}
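The failure mode here, a buffered reader consuming bytes beyond the line it returns, can be reproduced outside Vala/GIO; a short Python sketch of the same pitfall:

```python
import io

# Simulate the wire: the client sends "STARTTLS\r\n" and then immediately
# begins the TLS handshake (a ClientHello record starts with 0x16 0x03 ...).
wire = io.BytesIO(b"STARTTLS\r\n" + b"\x16\x03\x01\x00\xc7")
buffered = io.BufferedReader(wire)

line = buffered.readline()   # returns b"STARTTLS\r\n"
print(wire.read())           # b'' -- the raw stream is already drained:
                             # readline read ahead into the internal buffer.
print(buffered.peek())       # the ClientHello bytes are stuck in the buffer

# A TLS object handed the raw `wire` stream at this point sees EOF, which
# matches the "Stream is already closed" error. Reading one byte at a time,
# as the replacement read_line () does, never over-reads.
```

This is why handing the original GSocketConnection to TlsServerConnection fails after reading lines through DataInputStream: the handshake bytes never reach the TLS layer.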