TLS in Practice

"Theory without practice is empty; practice without theory is blind. In TLS, practice without theory is also a data breach." -- Adapted from Immanuel Kant, by a jaded security consultant

Imagine intercepting a TLS handshake from your staging server and discovering it negotiated TLS 1.0 with RC4 encryption. That is like locking your front door but leaving all the windows open. You killed TLS 1.0 on the main site last quarter, but nobody checked staging. And staging connects to the same database as production. Surprise.


The openssl s_client Swiss Army Knife

The openssl s_client command is the single most useful tool for debugging TLS connections. It is the stethoscope of network security -- before you use fancy scanners, you use this to listen to the heartbeat of a TLS connection.

Basic Connection Test

# Connect and show the full handshake, certificate chain, and session details
openssl s_client -connect www.example.com:443 -servername www.example.com < /dev/null

The -servername flag is critical. It sends the Server Name Indication (SNI) extension, which tells the server which hostname you are requesting. Without it, servers hosting multiple domains will return the default certificate, which may not match. Many debugging sessions have been wasted because someone forgot -servername.

Understanding the Output Line by Line

When you run openssl s_client, the output has several distinct sections. Here is what each one means:

CONNECTED(00000003)

TCP connection established. If you see "Connection refused" or "Connection timed out," the problem is at the network layer, not TLS.

depth=2 C=US, O=DigiCert Inc, CN=DigiCert Global Root G2
verify return:1
depth=1 C=US, O=DigiCert Inc, CN=DigiCert SHA2 Extended Validation Server CA
verify return:1
depth=0 CN=www.example.com
verify return:1

Chain verification walkthrough. depth=0 is the server's certificate (leaf). depth=1 is the intermediate. depth=2 is the root. verify return:1 means each step passed. If any shows verify return:0, the chain is broken -- the error code that follows tells you why.

Certificate chain
 0 s:CN=www.example.com
   i:CN=DigiCert SHA2 Extended Validation Server CA
 1 s:CN=DigiCert SHA2 Extended Validation Server CA
   i:CN=DigiCert Global Root G2

The s: line is the subject, i: is the issuer. Trace the chain: cert 0's issuer should match cert 1's subject. If the chain is incomplete (missing intermediate), you will see verify return:0 with error 20 ("unable to get local issuer certificate").
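The chain-tracing rule can be expressed as a small check. A toy sketch, using (subject, issuer) pairs in the same order as the s_client output above:

```python
def chain_is_linked(chain):
    """chain: list of (subject, issuer) pairs, leaf first, as printed
    in the s_client 'Certificate chain' section. Each cert's issuer
    must match the next cert's subject."""
    return all(chain[i][1] == chain[i + 1][0] for i in range(len(chain) - 1))

chain = [
    ("CN=www.example.com", "CN=DigiCert SHA2 Extended Validation Server CA"),
    ("CN=DigiCert SHA2 Extended Validation Server CA", "CN=DigiCert Global Root G2"),
]
print(chain_is_linked(chain))  # True
```

A broken chain (missing intermediate) shows up as a pair whose issuer matches nothing that follows.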

SSL-Session:
    Protocol  : TLSv1.3
    Cipher    : TLS_AES_256_GCM_SHA384
    ...
    Verify return code: 0 (ok)

The negotiated protocol and cipher suite. Verify return code: 0 (ok) is the final verdict. Common non-zero codes:

| Code | Meaning | Fix |
|------|---------|-----|
| 10 | Certificate has expired | Renew the certificate |
| 18 | Self-signed certificate | Add CA to trust store or fix the chain |
| 19 | Self-signed cert in chain | A CA cert in the chain is self-signed but not in the trust store |
| 20 | Unable to get local issuer certificate | Missing intermediate; server must send it |
| 21 | Unable to verify the first certificate | Same as 20, but specifically the leaf cert's issuer is missing |
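When scripting this check, the final verdict line can be parsed directly. A minimal sketch, assuming the output format shown above:

```python
import re

def parse_verify_code(s_client_output: str):
    """Extract the final (code, reason) pair from openssl s_client output."""
    m = re.search(r"Verify return code: (\d+) \((.*?)\)", s_client_output)
    return (int(m.group(1)), m.group(2)) if m else None

sample = "Verify return code: 20 (unable to get local issuer certificate)"
print(parse_verify_code(sample))  # (20, 'unable to get local issuer certificate')
```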

Advanced s_client Usage

# Force a specific TLS version
openssl s_client -connect example.com:443 -tls1_2 < /dev/null  # Only TLS 1.2
openssl s_client -connect example.com:443 -tls1_3 < /dev/null  # Only TLS 1.3
# If the server doesn't support the requested version, you'll get a handshake failure

# Test specific cipher suites (TLS 1.2)
openssl s_client -connect example.com:443 -cipher ECDHE-RSA-AES256-GCM-SHA384 < /dev/null

# Test specific cipher suites (TLS 1.3)
openssl s_client -connect example.com:443 -ciphersuites TLS_AES_256_GCM_SHA384 < /dev/null

# Check OCSP stapling support
openssl s_client -connect example.com:443 -status < /dev/null 2>/dev/null \
  | grep -A 10 "OCSP Response"
# "OCSP Response Status: successful" = stapling enabled
# "OCSP response: no response sent" = stapling disabled or not configured

# Connect to a specific IP (testing behind load balancers or CDNs)
openssl s_client -connect 93.184.216.34:443 -servername example.com < /dev/null

# Show all certificates in the chain in PEM format
openssl s_client -connect example.com:443 -showcerts < /dev/null

# Test mutual TLS (client certificate authentication)
openssl s_client -connect api.internal:443 \
  -cert client.crt -key client.key -CAfile ca-bundle.crt < /dev/null

# Check for TLS compression (must be disabled -- CRIME attack)
openssl s_client -connect example.com:443 < /dev/null 2>/dev/null \
  | grep "Compression"
# Must show: "Compression: NONE"

# Send an HTTP request over the TLS connection
echo -e "GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n" \
  | openssl s_client -connect example.com:443 -servername example.com -quiet

# Get the server's certificate fingerprint (useful for pinning)
openssl s_client -connect example.com:443 -servername example.com < /dev/null 2>/dev/null \
  | openssl x509 -noout -fingerprint -sha256

Compare three major sites' TLS configurations:

```bash
for site in google.com github.com cloudflare.com; do
  echo "=== $site ==="
  echo | openssl s_client -connect "$site:443" -servername "$site" 2>/dev/null \
    | grep -E "Protocol|Cipher|Verify return"
  echo
done
```

Which uses TLS 1.3? Which cipher suites do they prefer? What is the chain depth? Now try adding `-tls1_2` to see what TLS 1.2 cipher suites they offer. You will notice that all three prefer ECDHE key exchange and AEAD ciphers.

Cipher Suites: Choosing Your Weapons

A cipher suite is a combination of four cryptographic algorithms that together provide all the security properties of a TLS connection. Choosing the wrong combination can make an otherwise correct TLS deployment vulnerable.

Anatomy of a Cipher Suite Name

graph LR
    subgraph "TLS 1.2 Cipher Suite Name"
        KE["ECDHE<br/>(Key Exchange)"] --> Auth["RSA<br/>(Authentication)"]
        Auth --> Enc["AES256-GCM<br/>(Encryption)"]
        Enc --> Hash["SHA384<br/>(PRF/MAC)"]
    end

For TLS 1.2, the full cipher suite name encodes all four algorithms:

ECDHE  -  RSA  -  AES256-GCM  -  SHA384
  |        |        |              |
  |        |        |              +-- PRF hash / MAC algorithm
  |        |        +-- Bulk cipher (algorithm, key size, mode)
  |        +-- Authentication method (how the server proves identity)
  +-- Key exchange (how client and server agree on a shared secret)

TLS 1.3 simplified the naming because key exchange is now always ephemeral Diffie-Hellman (ECDHE or DHE), and authentication is handled separately:

TLS_AES_256_GCM_SHA384
     |       |     |
     |       |     +-- Hash for HKDF key derivation
     |       +-- AEAD mode (authenticated encryption)
     +-- Encryption algorithm and key size
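The decomposition above can be sketched as a toy splitter for OpenSSL-style TLS 1.2 names. This is an illustration of the field layout only, not a full registry parser (names like CHACHA20-POLY1305 need special-casing):

```python
def parse_tls12_name(name: str) -> dict:
    # e.g. "ECDHE-RSA-AES256-GCM-SHA384" splits into key exchange,
    # authentication, bulk cipher + mode, and PRF/MAC hash
    parts = name.split("-")
    return {
        "key_exchange": parts[0],
        "authentication": parts[1],
        "bulk_cipher": "-".join(parts[2:-1]),
        "hash": parts[-1],
    }

print(parse_tls12_name("ECDHE-RSA-AES256-GCM-SHA384"))
# {'key_exchange': 'ECDHE', 'authentication': 'RSA',
#  'bulk_cipher': 'AES256-GCM', 'hash': 'SHA384'}
```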

The Critical Properties

Forward Secrecy (from ECDHE or DHE key exchange) -- Each connection generates a unique ephemeral key pair. Even if the server's long-term private key is stolen later, past recorded traffic cannot be decrypted. Without forward secrecy, an attacker who records encrypted traffic today and steals the server key next year can decrypt everything retroactively.

Authenticated Encryption with Associated Data (AEAD) (from GCM or POLY1305 modes) -- Provides both confidentiality and integrity in a single operation. Non-AEAD modes (like CBC) require separate MAC computation and have been the source of numerous attacks (BEAST, Lucky13, POODLE).

graph TD
    subgraph "STRONG -- Use These"
        T13_1["TLS_AES_256_GCM_SHA384<br/>(TLS 1.3)"]
        T13_2["TLS_CHACHA20_POLY1305_SHA256<br/>(TLS 1.3)"]
        T13_3["TLS_AES_128_GCM_SHA256<br/>(TLS 1.3)"]
        T12_1["ECDHE-ECDSA-AES256-GCM-SHA384"]
        T12_2["ECDHE-RSA-AES256-GCM-SHA384"]
        T12_3["ECDHE-ECDSA-CHACHA20-POLY1305"]
        T12_4["ECDHE-RSA-CHACHA20-POLY1305"]
        T12_5["ECDHE-ECDSA-AES128-GCM-SHA256"]
        T12_6["ECDHE-RSA-AES128-GCM-SHA256"]
    end

    subgraph "WEAK -- Never Use"
        W1["RC4<br/>(broken cipher)"]
        W2["DES / 3DES<br/>(broken / slow)"]
        W3["CBC mode<br/>(padding oracle)"]
        W4["RSA key exchange<br/>(no forward secrecy)"]
        W5["MD5 MAC<br/>(broken hash)"]
        W6["EXPORT ciphers<br/>(FREAK attack)"]
        W7["DHE with < 2048-bit groups<br/>(Logjam attack)"]
    end

    style T13_1 fill:#69db7c,color:#000
    style T13_2 fill:#69db7c,color:#000
    style T13_3 fill:#69db7c,color:#000
    style T12_1 fill:#a9e34b,color:#000
    style T12_2 fill:#a9e34b,color:#000
    style T12_3 fill:#a9e34b,color:#000
    style T12_4 fill:#a9e34b,color:#000
    style T12_5 fill:#a9e34b,color:#000
    style T12_6 fill:#a9e34b,color:#000
    style W1 fill:#ff6b6b,color:#fff
    style W2 fill:#ff6b6b,color:#fff
    style W3 fill:#ff6b6b,color:#fff
    style W4 fill:#ff6b6b,color:#fff
    style W5 fill:#ff6b6b,color:#fff
    style W6 fill:#ff6b6b,color:#fff
    style W7 fill:#ff6b6b,color:#fff

Nginx TLS Hardening

# Modern configuration (TLS 1.3 only)
# Use when all clients support TLS 1.3
ssl_protocols TLSv1.3;
ssl_prefer_server_ciphers off;

# Intermediate configuration (TLS 1.2 + 1.3) -- recommended for most sites
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305;
ssl_prefer_server_ciphers off;
# Note: ssl_prefer_server_ciphers is "off" because all the listed ciphers
# are strong, so the client's preference (based on hardware acceleration
# availability) is the right tiebreaker.

# Session management
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:10m;  # ~40,000 sessions
ssl_session_tickets off;  # Disable for forward secrecy (session tickets
                          # reuse the same key, breaking PFS)

# HSTS -- tell browsers to always use HTTPS for this domain
add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;

# List all cipher suites your OpenSSL installation supports
openssl ciphers -v 'ALL:COMPLEMENTOFALL' | column -t | head -20

# List only the strong cipher suites
openssl ciphers -v 'ECDHE+AESGCM:ECDHE+CHACHA20:!aNULL:!MD5:!DSS' | column -t

# Test which ciphers a specific server accepts
nmap --script ssl-enum-ciphers -p 443 example.com

Forward Secrecy: Why It Matters

What happens without forward secrecy? Without it, all your encrypted traffic has a ticking time bomb attached to it.

sequenceDiagram
    participant Attacker as Passive Attacker<br/>(records all traffic)
    participant Client as Client
    participant Server as Server

    rect rgb(255, 200, 200)
    Note over Client,Server: WITHOUT Forward Secrecy (RSA Key Exchange)
    Client->>Server: ClientHello
    Server->>Client: ServerHello + Certificate (RSA public key)
    Client->>Server: Encrypted premaster secret<br/>(encrypted with server's RSA public key)
    Note over Client,Server: Both derive session keys<br/>from premaster secret
    Client->>Server: Encrypted application data
    Server->>Client: Encrypted application data
    Attacker-->>Attacker: Records everything
    end

    Note over Attacker: Years later: attacker<br/>obtains server's RSA<br/>private key

    Attacker-->>Attacker: Decrypt premaster secret<br/>from recording
    Attacker-->>Attacker: Derive session keys
    Attacker-->>Attacker: Decrypt ALL recorded traffic

    rect rgb(200, 255, 200)
    Note over Client,Server: WITH Forward Secrecy (ECDHE Key Exchange)
    Client->>Server: ClientHello
    Server->>Client: ServerHello + Certificate +<br/>Ephemeral ECDH public key<br/>(signed with server's long-term key)
    Client->>Server: Client's ephemeral ECDH public key
    Note over Client,Server: Both compute shared secret<br/>via ECDH. Ephemeral keys<br/>are DESTROYED after handshake.
    Client->>Server: Encrypted application data
    Server->>Client: Encrypted application data
    Attacker-->>Attacker: Records everything
    end

    Note over Attacker: Years later: attacker<br/>obtains server's RSA<br/>private key

    Attacker-->>Attacker: Cannot derive session keys!<br/>Ephemeral keys are gone.<br/>Past traffic is SAFE.

This is not theoretical. Intelligence agencies have been documented recording encrypted traffic for later decryption -- a strategy called "harvest now, decrypt later." The Snowden documents revealed that the NSA's MUSCULAR program collected encrypted traffic at scale. And with quantum computing advancing, RSA key exchanges recorded today may become decryptable once a machine large enough to run Shor's algorithm exists.

**Always require ECDHE or DHE key exchange.** Static RSA key exchange was removed entirely from TLS 1.3 because the risk was considered unacceptable.
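The property is easiest to see in a toy finite-field Diffie-Hellman exchange. The parameters below are far too small for real use (real TLS uses ECDHE over standardized curves); the point is only that the recorded public values alone do not yield the secret once the ephemeral exponents are discarded:

```python
import secrets

P = 2**64 - 59  # a 64-bit prime; illustration only, never use in practice
G = 2

def ephemeral_exchange():
    a = secrets.randbelow(P - 2) + 1     # client ephemeral secret
    b = secrets.randbelow(P - 2) + 1     # server ephemeral secret
    A = pow(G, a, P)                     # public values: all a passive
    B = pow(G, b, P)                     # recorder ever sees
    assert pow(B, a, P) == pow(A, b, P)  # both sides derive the same secret
    # Once a and b are discarded after the handshake, the recorded
    # A and B alone no longer determine the shared secret.
    return pow(B, a, P)

ephemeral_exchange()
```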

Certificate Pinning

Certificate pinning is a mechanism where an application "remembers" which certificate or public key it expects from a server, rejecting connections even if a different valid certificate is presented. It provides defense against a compromised CA issuing a fraudulent certificate for your domain.

Types of Pinning

| Pinning Target | Survives Cert Renewal? | Survives CA Change? | Risk Level |
|---|---|---|---|
| Leaf certificate hash | No | No | High (must update pin on every renewal) |
| Intermediate CA hash | Yes | No | Medium (breaks if CA changes intermediates) |
| Public key (SPKI) hash | Yes (if key reused) | Yes (if key reused) | Moderate (common approach) |
| Root CA hash | Yes | No | Low (but least protection) |

# Generate the SPKI pin hash for a certificate
openssl x509 -in cert.pem -pubkey -noout \
  | openssl pkey -pubin -outform DER \
  | openssl dgst -sha256 -binary \
  | openssl enc -base64
# Output: YLh1dUR9y6Kja30RrAn7JKnbQG/uEtLMkBgFF2Fuihg=

# Get the SPKI pin for a remote server
openssl s_client -connect example.com:443 -servername example.com < /dev/null 2>/dev/null \
  | openssl x509 -pubkey -noout \
  | openssl pkey -pubin -outform DER \
  | openssl dgst -sha256 -binary \
  | openssl enc -base64

The HPKP Disaster

**HTTP Public Key Pinning (HPKP) -- The Security Feature That Became a Weapon**

HPKP (RFC 7469) was a browser mechanism that let websites publish their certificate pins via an HTTP header:

```
Public-Key-Pins:
  pin-sha256="base64+primary==";
  pin-sha256="base64+backup==";
  max-age=5184000;
  includeSubDomains
```

The idea was sound: the server tells the browser "for the next 60 days, only trust connections to my domain if the certificate chain includes one of these specific public keys." This would prevent a compromised CA from issuing a fraudulent certificate that browsers would accept.

But HPKP had catastrophic failure modes that made it more dangerous than the attack it prevented:

1. **Self-denial-of-service**: If you lose the pinned keys and do not have working backup pins, your site becomes permanently inaccessible to any browser that cached the pins. One major site set `max-age` to 60 days, then lost their primary key during a server migration. Their site was unreachable to returning visitors for two months. New visitors worked fine, which made debugging even harder.

2. **RansomPins attack**: An attacker who temporarily compromises a site (XSS, stolen credentials, DNS hijack) could set HPKP headers pinning to the attacker's own keys. Even after the original owner regains full control, browsers that cached the attacker's pins refuse to connect. The attacker demands payment: "send me $50,000 in Bitcoin or your domain stays bricked for 60 days."

3. **Operational nightmare**: Key rotation required maintaining backup pins that you had to generate in advance, store securely, and include in the header before you ever used them. You needed to plan key rotation months ahead. One wrong step = extended outage.

4. **Incompatibility with CDNs**: CDN providers manage certificates on your behalf and may change them without warning. HPKP broke when the CDN rotated its certificates using different keys.

Chrome deprecated HPKP in Chrome 72 (January 2019). Firefox followed. The standard is now effectively dead.

**The fundamental lesson:** Security mechanisms that can permanently break things with a single configuration mistake do not survive contact with real-world operations. The cure was worse than the disease. HPKP's failure mode (complete site unavailability for weeks or months) was more severe than the attack it prevented (CA compromise, which is rare and has other mitigations like CT).

So is pinning dead? HTTP-based pinning in browsers is dead, and deservedly so. But pinning in mobile apps is alive and widely used. The critical difference is that app developers control the update mechanism -- if you brick the pins, you push an app update through the app store. With HPKP, you could not force browsers to forget the old pins; you had to wait for max-age to expire.

Mobile App Certificate Pinning

# Python example: pinning the SPKI hash of a server's public key
import hashlib
import ssl
import socket
import base64

EXPECTED_PINS = {
    "YLh1dUR9y6Kja30RrAn7JKnbQG/uEtLMkBgFF2Fuihg=",  # Primary
    "sRHdihwgkaib1P1gN7SkBGk6Fg3Jh1Kf6HtMYI0ueE=",   # Backup
}

def verify_pin(host, port=443):
    """Connect to host, extract SPKI hash, verify against pins."""
    ctx = ssl.create_default_context()
    with ctx.wrap_socket(socket.socket(), server_hostname=host) as s:
        s.connect((host, port))
        der_cert = s.getpeercert(binary_form=True)

    # Parse the certificate to extract the Subject Public Key Info
    from cryptography import x509
    from cryptography.hazmat.primitives import serialization
    cert = x509.load_der_x509_certificate(der_cert)
    spki_bytes = cert.public_key().public_bytes(
        encoding=serialization.Encoding.DER,
        format=serialization.PublicFormat.SubjectPublicKeyInfo
    )
    pin = base64.b64encode(hashlib.sha256(spki_bytes).digest()).decode()

    if pin not in EXPECTED_PINS:
        raise ssl.SSLError(f"Certificate pin mismatch! Got: {pin}")
    return True

If you implement certificate pinning in a mobile app:

- **Always include backup pins** for at least one alternate key you control but have not yet deployed
- **Have a remote kill switch** -- a feature flag or remote configuration that can disable pinning without an app update, in case of emergency
- **Test pin rotation** thoroughly in staging before touching production
- **Monitor pin validation failures** -- they might indicate an attack *or* a misconfiguration on your side
- **Set reasonable pin lifetimes** -- if using time-based pinning, keep durations short
- **Pin the CA or intermediate**, not the leaf certificate, to survive normal certificate renewals
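The backup-pin and kill-switch advice combine into a validation routine along these lines. A sketch only; the remote config shape and its `pinning_enabled` key are hypothetical names:

```python
def check_pin(observed_pin: str, expected_pins: set, remote_config: dict) -> bool:
    """Validate a connection's SPKI pin, honoring a remote kill switch.
    remote_config and the 'pinning_enabled' key are hypothetical names."""
    if not remote_config.get("pinning_enabled", True):
        # Kill switch thrown: fall back to ordinary TLS validation,
        # so a botched pin rotation cannot brick the app
        return True
    return observed_pin in expected_pins

pins = {"primary-pin==", "backup-pin=="}
print(check_pin("backup-pin==", pins, {"pinning_enabled": True}))    # True
print(check_pin("attacker-pin==", pins, {"pinning_enabled": True}))  # False
print(check_pin("attacker-pin==", pins, {"pinning_enabled": False})) # True
```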

Mutual TLS (mTLS)

In standard TLS, only the server presents a certificate. The client verifies the server's identity, but the server has no cryptographic proof of who the client is. In mutual TLS, both sides authenticate with certificates.

sequenceDiagram
    participant Client as Client<br/>(with client certificate)
    participant Server as Server<br/>(requires client auth)

    Client->>Server: ClientHello
    Server->>Client: ServerHello + Server Certificate +<br/>CertificateRequest<br/>(specifies acceptable client CA list)
    Client->>Client: Verify server certificate
    Client->>Server: Client Certificate +<br/>CertificateVerify<br/>(proves possession of client private key)
    Server->>Server: Verify client certificate:<br/>Signed by trusted client CA?<br/>Not expired?<br/>Not revoked?<br/>CN/SAN matches expected identity?
    Note over Client,Server: Both sides authenticated.<br/>Encrypted channel established.
    Client->>Server: Encrypted application data
    Server->>Client: Encrypted application data

When to Use mTLS

  • Service-to-service communication in microservices -- every service proves its identity to every other service it calls
  • API authentication for machine-to-machine communication where API keys are insufficient
  • Zero-trust networks where network location does not imply trust
  • IoT device authentication where devices need machine identity without passwords
  • BeyondCorp-style access as an alternative to VPNs

Setting Up mTLS

# 1. Create a CA specifically for client certificates
openssl req -x509 -newkey ec -pkeyopt ec_paramgen_curve:prime256v1 \
  -keyout client-ca.key -out client-ca.crt \
  -days 3650 -nodes -subj "/CN=Acme Client CA"

# 2. Generate a client key and CSR
openssl req -newkey ec -pkeyopt ec_paramgen_curve:prime256v1 \
  -keyout client.key -out client.csr \
  -nodes -subj "/CN=payment-service/O=Acme Corp/OU=Backend"

# 3. Sign the client certificate with the client CA
openssl x509 -req -in client.csr -CA client-ca.crt -CAkey client-ca.key \
  -CAcreateserial -out client.crt -days 365 -sha256 \
  -extfile <(printf "extendedKeyUsage=clientAuth\nbasicConstraints=CA:FALSE")

# 4. Test the mTLS connection
openssl s_client -connect api.internal:443 \
  -cert client.crt -key client.key -CAfile server-ca.crt < /dev/null

# 5. Test with curl
curl --cert client.crt --key client.key --cacert server-ca.crt \
  https://api.internal/health

Nginx mTLS Configuration

server {
    listen 443 ssl;
    server_name api.internal;

    ssl_certificate     /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;

    # Client certificate authentication
    ssl_client_certificate /etc/nginx/ssl/client-ca.crt;  # Trusted client CA
    ssl_verify_client on;           # Require client certs (or "optional" for gradual rollout)
    ssl_verify_depth 2;             # Maximum chain depth for client certs

    # Pass client certificate identity to the application
    proxy_set_header X-Client-Cert-DN     $ssl_client_s_dn;
    proxy_set_header X-Client-Cert-Verify $ssl_client_verify;
    # Nginx has no built-in CN-only variable; if the application needs
    # the CN alone, extract it from $ssl_client_s_dn with a map block
}

**Service Meshes and Automatic mTLS**

In Kubernetes, service meshes automate mTLS between all services without any application code changes:

**Istio** deploys Envoy sidecar proxies alongside each pod. The Istio control plane (istiod) acts as a CA, issuing SPIFFE-based X.509 certificates to each workload. The sidecar proxies handle mTLS transparently:
- Certificates are automatically issued when a pod starts
- Default rotation period is 24 hours
- PeerAuthentication policies define which services require mTLS
- AuthorizationPolicy resources control which services can communicate

**Linkerd** takes a similar approach with its own identity system and proxy. It uses mTLS by default for all meshed services with zero configuration.

**The trade-off**: You get strong mutual authentication between all services without modifying application code. The cost is the operational complexity of running the mesh itself, increased memory usage (sidecar per pod), and slight latency from the proxy hop. For most organizations with more than a handful of microservices, the trade-off is worth it.
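As a sketch of what the Istio side looks like, a PeerAuthentication policy requiring mTLS for every workload in a namespace might read as follows (the namespace name is illustrative; check your Istio version for the current API version):

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: production   # illustrative namespace
spec:
  mtls:
    mode: STRICT           # reject any plaintext traffic to these workloads
```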

TLS Termination Architectures

Where you terminate TLS has significant security implications. There are three common architectures, each with different security properties.

graph LR
    subgraph "Architecture 1: TLS Termination"
        C1[Client] -->|HTTPS<br/>encrypted| LB1[Load Balancer<br/>TLS terminates here]
        LB1 -->|HTTP<br/>UNENCRYPTED| B1[Backend Server]
    end

TLS Termination -- The load balancer decrypts all traffic and forwards it to backends in plaintext. Advantages: simplest configuration, LB can inspect and route based on HTTP headers, path-based routing works, WAF rules can inspect request bodies. Disadvantage: traffic between the LB and backend is unencrypted. Anyone who can sniff the internal network segment sees everything.

graph LR
    subgraph "Architecture 2: TLS Re-encryption"
        C2[Client] -->|HTTPS<br/>encrypted| LB2[Load Balancer<br/>TLS terminates + re-encrypts]
        LB2 -->|HTTPS<br/>re-encrypted| B2[Backend Server]
    end

TLS Re-encryption -- The load balancer decrypts, inspects, then establishes a new TLS connection to the backend. There is a brief moment where data is in plaintext in the LB's memory. Advantages: end-to-end encryption (mostly), LB can still inspect traffic. Disadvantage: double the TLS overhead, more complex certificate management (two sets of certs), and technically the LB sees plaintext.

graph LR
    subgraph "Architecture 3: TLS Passthrough"
        C3[Client] -->|HTTPS<br/>encrypted| LB3[Load Balancer<br/>Layer 4 only]
        LB3 -->|HTTPS<br/>same session| B3[Backend Server<br/>TLS terminates here]
    end

TLS Passthrough -- The load balancer operates at Layer 4 (TCP), forwarding encrypted bytes without decrypting them. The backend server handles TLS. Advantages: true end-to-end encryption, LB never sees plaintext. Disadvantages: LB cannot inspect HTTP headers or content, no path-based routing, no HTTP-level load balancing, no WAF inspection at the LB.

The right choice depends on your threat model. If your internal network is segmented and you trust it, termination is fine and simplest. If you operate in a zero-trust environment, consider re-encryption or mTLS between the LB and backends. For the most sensitive workloads (payments, healthcare data), passthrough with TLS handled by the application server gives you the strongest guarantees, at the cost of operational flexibility.
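For the re-encryption architecture in nginx, the upstream leg gets its own TLS settings via the `proxy_ssl_*` directives. A minimal sketch; hostnames and paths are placeholders:

```nginx
location / {
    proxy_pass https://backend.internal;             # second TLS connection
    proxy_ssl_verify              on;                # verify the backend's cert
    proxy_ssl_trusted_certificate /etc/nginx/ssl/backend-ca.crt;
    proxy_ssl_name                backend.internal;  # SNI + hostname check
    proxy_ssl_protocols           TLSv1.2 TLSv1.3;
}
```

Without `proxy_ssl_verify on`, nginx encrypts the backend leg but accepts any certificate, which defeats the point of re-encryption.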


Common TLS Misconfigurations Ranked by Severity

An audit of a financial services company uncovered six years of TLS configuration debt:

1. TLS 1.0 and 1.1 enabled "for compatibility" -- with clients that no longer existed
2. CBC mode cipher suites offered -- vulnerable to BEAST, Lucky13, and POODLE variants
3. OCSP stapling disabled, with the OCSP responder returning errors (nobody had noticed for two years)
4. HSTS header missing entirely, making HTTP downgrade attacks trivial
5. A single wildcard certificate shared across 47 servers, including development laptops
6. That same certificate was also used for their SMTP server, IMAP server, and VPN concentrator -- a compromise of any one server exposed the key for all of them

Any single issue alone might not be catastrophic. Together, they represented a systemic failure to treat TLS configuration as a security control. They had a "TLS works, check the box" mentality instead of a "TLS is configured correctly and hardened" mentality. The remediation took three months.

The Top 10 TLS Misconfigurations

| Rank | Misconfiguration | Risk | Detection |
|---|---|---|---|
| 1 | Expired certificate | Complete outage; security monitoring blind spots | `openssl x509 -checkend 0` |
| 2 | Missing intermediate certificates | Works on some clients, fails on Android/Java/curl | `openssl s_client -showcerts` |
| 3 | TLS 1.0/1.1 enabled | BEAST, POODLE, other known attacks | `nmap --script ssl-enum-ciphers` |
| 4 | No HSTS | HTTP downgrade via active MITM | `curl -sI`, check for header |
| 5 | Weak cipher suites (RC4, DES, CBC) | Various cryptographic attacks | `testssl.sh --ciphers` |
| 6 | No forward secrecy (RSA key exchange) | Past traffic decryptable if key stolen | `openssl s_client`, check cipher |
| 7 | Key reuse across environments | Dev compromise exposes production | Certificate inventory audit |
| 8 | TLS compression enabled | CRIME attack extracts session cookies | `openssl s_client`, check Compression |
| 9 | Mixed content (HTTPS page, HTTP resources) | Browsers block, breaking functionality | Browser dev tools console |
| 10 | Insecure renegotiation | MITM injection (CVE-2009-3555) | `openssl s_client`, check "Secure Renegotiation" line |
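Rank 1 is also the easiest to monitor continuously. A sketch computing days to expiry from the `notAfter` string that Python's `ssl.getpeercert()` returns:

```python
import ssl
import time

def days_until_expiry(not_after: str) -> float:
    """not_after in getpeercert() form, e.g. 'Jun  1 12:00:00 2026 GMT'.
    Negative result means the certificate has already expired."""
    return (ssl.cert_time_to_seconds(not_after) - time.time()) / 86400
```

Wire this into your monitoring and alert well before the threshold hits zero, rather than discovering rank 1 from an outage.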

Testing with testssl.sh

testssl.sh is a comprehensive, open-source TLS testing tool that checks for every known vulnerability and misconfiguration. It is the standard tool for TLS auditing in the security industry.

# Install testssl.sh
git clone --depth 1 https://github.com/drwetter/testssl.sh.git
cd testssl.sh

# Full scan of a server
./testssl.sh example.com

# Test specific aspects
./testssl.sh --protocols example.com          # Protocol support
./testssl.sh --ciphers example.com            # All accepted cipher suites
./testssl.sh --vulnerable example.com         # Check for known vulnerabilities
./testssl.sh --headers example.com            # HTTP security headers
./testssl.sh --server-defaults example.com    # Certificate and server details

# Full scan with JSON output for automation
./testssl.sh --jsonfile results.json --severity HIGH example.com

# Scan an internal server (specify IP directly)
./testssl.sh --ip 10.0.1.50 internal.example.com:443

# Quick check of just the critical issues
./testssl.sh --fast example.com

What testssl.sh Checks

The tool tests for specific named vulnerabilities:

  • Heartbleed (CVE-2014-0160) -- OpenSSL memory leak allowing extraction of server memory contents, including private keys
  • CCS Injection (CVE-2014-0224) -- OpenSSL flaw allowing MITM to downgrade encryption
  • ROBOT (Return Of Bleichenbacher's Oracle Threat) -- RSA key exchange vulnerability
  • CRIME (CVE-2012-4929) -- TLS compression allows session cookie extraction
  • BREACH (CVE-2013-3587) -- HTTP compression variant of CRIME
  • POODLE (CVE-2014-3566) -- SSLv3 CBC padding oracle
  • DROWN (CVE-2016-0800) -- Cross-protocol attack using SSLv2 to decrypt TLS
  • LOGJAM (CVE-2015-4000) -- DHE with small groups allows downgrade
  • BEAST (CVE-2011-3389) -- TLS 1.0 CBC IV chaining attack
  • LUCKY13 (CVE-2013-0169) -- CBC timing side-channel

SSL Labs (Qualys)

For public-facing servers, SSL Labs provides a comprehensive web-based test with a letter grade:

# Submit a scan via the SSL Labs API
curl "https://api.ssllabs.com/api/v3/analyze?host=example.com&publish=off&all=done"

# The web interface at https://www.ssllabs.com/ssltest/ provides
# a detailed report with a grade from A+ through F
# A+ requires: TLS 1.2+, strong ciphers, HSTS, no vulnerabilities

Run testssl.sh against your staging server and fix every finding:

```bash
./testssl.sh --severity HIGH staging.example.com 2>&1 | tee staging-audit.txt
```

Then use the Mozilla SSL Configuration Generator to fix what testssl.sh found:
https://ssl-config.mozilla.org/

This tool generates recommended TLS configurations for Nginx, Apache, HAProxy, AWS ALB, and more, based on your compatibility requirements (Modern, Intermediate, or Old).

TLS Debugging Workflow

When TLS is not working, follow this systematic approach instead of guessing:

flowchart TD
    A[TLS Connection Fails] --> B{Can you reach the port?}
    B -->|No: Connection refused/timeout| C[Check: Is the service running?<br/>Firewall rules? Security groups?<br/>nc -zv host 443]
    B -->|Yes: Connection established| D{Does TLS handshake complete?}
    D -->|No: Handshake failure| E{What error?}
    E -->|Protocol mismatch| F[Client and server have no<br/>common TLS version.<br/>Check ssl_protocols config]
    E -->|Cipher mismatch| G[No common cipher suite.<br/>Check ssl_ciphers config]
    E -->|Unknown CA| H[Server cert not trusted.<br/>Check --CAfile or trust store]
    D -->|Yes: Handshake succeeds| I{Certificate verification OK?}
    I -->|Error 10: Expired| J[Renew the certificate]
    I -->|Error 18/19: Self-signed| K[Add CA to trust store<br/>or fix the chain]
    I -->|Error 20/21: Missing issuer| L[Server not sending intermediate.<br/>Fix ssl_certificate to use fullchain]
    I -->|Error: Hostname mismatch| M[SAN does not include<br/>requested hostname.<br/>Check with: openssl x509 -ext subjectAltName]
    I -->|Verify return code: 0 ok| N{Application works?}
    N -->|No| O[TLS is fine.<br/>Check application layer:<br/>HTTP status codes, headers, routing]
    N -->|Yes| P[Connection working correctly]

# Step 1: Can you reach the port?
nc -zv example.com 443
# Connection to example.com 443 port [tcp/https] succeeded!

# Step 2: Does TLS handshake complete?
openssl s_client -connect example.com:443 -servername example.com < /dev/null 2>&1 | tail -5

# Step 3: What does the error say?
openssl s_client -connect example.com:443 -servername example.com \
  -verify_return_error < /dev/null 2>&1 | grep -E "Verify|error|alert"

# Step 4: Inspect the certificate itself
echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates -ext subjectAltName

# Step 5: Check the full chain
openssl s_client -connect example.com:443 -servername example.com \
  -showcerts < /dev/null 2>&1 | grep -E "^[ ]*[0-9] s:|i:|depth|Verify"

# Step 6: Capture the handshake for detailed analysis
sudo tcpdump -i eth0 -c 100 -w tls-debug.pcap host example.com and port 443
# Open in Wireshark, apply filter: tls.handshake

TLS 1.3 Differences

Is TLS 1.3 just a faster version of 1.2? No -- TLS 1.3 is a fundamentally better protocol. The IETF did not just add features -- they removed dangerous ones. It took four years and 28 drafts to finalize because removing features from an internet protocol is much harder than adding them.

| Feature | TLS 1.2 | TLS 1.3 |
| --- | --- | --- |
| Handshake round-trips | 2 RTT | 1 RTT (0-RTT resumption optional) |
| Key exchange | RSA or ECDHE | ECDHE only (forward secrecy mandatory) |
| Available cipher suites | ~300 possible combinations | 5 (all AEAD) |
| Static RSA key exchange | Supported | Removed |
| CBC mode ciphers | Supported | Removed |
| TLS compression | Supported (CRIME vuln.) | Removed |
| Renegotiation | Supported (complex, error-prone) | Removed |
| Session resumption | Session IDs / session tickets | Pre-Shared Key (PSK) |
| Handshake encryption | Plaintext until Finished | Encrypted after ServerHello |
| Certificate visibility | Sent in cleartext | Encrypted (observer cannot see which cert) |

**0-RTT Resumption in TLS 1.3**

TLS 1.3 supports 0-RTT (zero round-trip time) resumption, where a client that has previously connected can send application data in its very first message. This is excellent for performance -- the user perceives zero additional latency from TLS.

The security trade-off: 0-RTT data is replayable. An attacker who captures the 0-RTT message can replay it to the server. If that message contains "transfer $100 to Alice," the replay causes a second transfer.

Mitigations:
- **Only use 0-RTT for idempotent requests** (GET, HEAD -- not POST, PUT, DELETE)
- **Servers should implement anti-replay mechanisms** (strike registers or time-based filters)
- **Many security-conscious deployments disable 0-RTT entirely**

```nginx
# Disable 0-RTT in nginx (recommended for anything handling state changes)
ssl_early_data off;

# If enabled, the application must check the Early-Data header:
# proxy_set_header Early-Data $ssl_early_data;
# The app should reject non-idempotent requests when Early-Data: 1
```

Security Headers Beyond TLS

Proper TLS configuration is necessary but not sufficient for transport security. These HTTP headers complement TLS:

# Check security headers on a site
curl -sI https://example.com | grep -iE "strict-transport|content-security|x-frame|x-content|referrer"

# Essential security headers for any HTTPS site
# HSTS: Force HTTPS for all future visits (2 years)
add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;

# Prevent MIME type sniffing (stops browsers from "guessing" content types)
add_header X-Content-Type-Options "nosniff" always;

# Prevent clickjacking (disallow embedding in frames)
add_header X-Frame-Options "DENY" always;

# Control what information is sent in the Referer header
add_header Referrer-Policy "strict-origin-when-cross-origin" always;

# Restrict access to browser APIs (camera, microphone, geolocation)
add_header Permissions-Policy "camera=(), microphone=(), geolocation=()" always;

HSTS is particularly important. Without HSTS, a user's first visit to your site might be over HTTP (if they type example.com without https://). An active attacker (at a coffee shop, malicious ISP, compromised router) can intercept this HTTP request and serve a fake page or redirect to a phishing site. HSTS tells the browser: "never connect to this domain over HTTP, ever." After the first successful HTTPS visit, the browser remembers the HSTS policy and refuses to use HTTP.

The preload directive goes further: if you submit your domain to the HSTS Preload List (hstspreload.org), browsers will ship with your domain hardcoded to always use HTTPS, even on the very first visit.


What You've Learned

This chapter gave you the practical skills to debug, configure, and harden TLS in production:

  • openssl s_client is your primary debugging tool -- it reveals protocol versions, cipher suites, certificate chains, OCSP stapling status, and verification errors with specific error codes
  • Cipher suite selection requires ECDHE for forward secrecy and AEAD modes (GCM, POLY1305); remove all CBC, RC4, and static RSA cipher suites
  • Forward secrecy (ECDHE) ensures that stolen server keys cannot decrypt previously recorded traffic -- critical given "harvest now, decrypt later" strategies
  • Certificate pinning is dead in browsers (HPKP) but essential in mobile apps; always include backup pins and a remote kill switch
  • Mutual TLS provides strong bidirectional authentication for service-to-service communication; service meshes automate it at scale with automatic certificate rotation
  • TLS termination architectures (termination, re-encryption, passthrough) trade off operational flexibility against end-to-end encryption guarantees
  • testssl.sh and SSL Labs provide comprehensive TLS auditing covering all known vulnerabilities
  • TLS 1.3 removes entire categories of attacks by eliminating dangerous features (CBC, static RSA, compression) and making forward secrecy mandatory
  • Systematic debugging follows a flowchart from network connectivity to protocol negotiation to certificate verification to application-layer issues

Now go fix that staging server. And add it to the monitoring. The "TLS works, checkbox complete" mentality is the enemy of security. Configuration is a spectrum, and your job is to push it toward the strong end and keep it there as new vulnerabilities are discovered and old assumptions are broken.