Monday, February 9, 2026

How (and Why) NIST Changed Password Guidance

For decades, password policy has been one of the most visible—and most frustrating—elements of information security. Users were trained to expect a familiar set of rules: passwords could be short as long as they were complex, filled with symbols and numbers, and they had to be changed every few months whether there was a problem or not. These requirements became deeply embedded in organizational policy, compliance frameworks, and even certification curricula. Yet in recent years, the National Institute of Standards and Technology (NIST) has fundamentally rethought this approach. Drawing on real-world breach data, attacker behavior, and human factors research, NIST has shifted away from complexity-driven, high-friction password rules toward guidance that emphasizes length, usability, and compromise awareness. Understanding why this change occurred is essential for security professionals, educators, and policymakers alike.

For years, the dominant “best practice” for passwords in enterprises and government looked like this:

  • Minimum length (often 8 characters)
  • Mandatory complexity (uppercase + lowercase + number + symbol)
  • Frequent expiration (often every 60–90 days)
  • Little tolerance for “simple” phrases or long, memorable passphrases

That approach became so normal that many people assumed it was the NIST position. In reality, a lot of the most rigid implementations were driven by a mix of organizational policy, compliance checklists, and older interpretations of authentication guidance—not necessarily because “users must rotate every 90 days” was the best security outcome.

Over the last several years, NIST’s Digital Identity Guidelines (SP 800-63 series) have steadily pushed the industry away from “complex and frequently changed” passwords and toward a model that emphasizes length, usability, and compromise-awareness—plus stronger overall controls around authentication.

As of August 1, 2025, NIST SP 800-63-4 supersedes the SP 800-63-3 suite, including its SP 800-63B authentication volume.


The “Former” Worldview: Complexity Rules + Routine Changes

Composition rules (complexity) as a proxy for strength

Older guidance and the policies derived from it often treated password strength as something you could “force” by requiring character-class diversity. NIST’s earlier Electronic Authentication Guideline (SP 800-63 v1.0, archived) even describes systems that require a mix of upper/lowercase, numbers, and special characters as part of a composition-and-entropy model.

The underlying assumption was straightforward:

  • If you make passwords look random-ish, they’ll resist guessing longer.
  • If you ban “dictionary words,” you’ll stop trivial passwords.
  • If you rotate them, you’ll limit the time an attacker can use a stolen password.


Periodic password changes to limit exposure

Older NIST guidance didn’t always say “rotate every 90 days” in the simplistic way many organizations implemented, but it did discuss password lifetimes and scenarios where changing secrets periodically limits attacker opportunity. For example, the archived SP 800-63 v1.0 describes targeted guessing assumptions tied to password lifetime and gives examples such as changing passwords every two years (and even references longer lifetimes like ten years in a specific attack-mitigation example).

In practice, many organizations collapsed these ideas into a blunt rule: rotate frequently—and 60–90 days became a common default.

The real-world outcome: users optimize for survival, not security

If you’ve taught Security+ or Network+ students, you’ve seen this pattern repeatedly. When users must invent new complex passwords on a schedule, they respond predictably:

  • incremental changes (Spring2026! → Summer2026!)
  • predictable patterns (Password1! → Password2!)
  • password reuse across systems
  • writing passwords down or storing them insecurely

These behaviors reduce effective entropy and often make the “new” password easier to guess once an attacker has seen the “old” one.
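
To make this concrete, here is a small, hypothetical Python sketch of the kind of mutation rules attacker tooling applies to a previously seen password. The rule set and season list are illustrative inventions, not taken from any specific cracking tool; real rulesets (hashcat-style) are far larger.

    import re

    # Illustrative mutation rules for guessing a "rotated" password from an
    # old one. Everything here is a toy; real attacker rulesets are far larger.

    SEASONS = ["Spring", "Summer", "Fall", "Winter"]

    def likely_successors(old: str) -> set[str]:
        """Guess what a user forced to change a password probably chose next."""
        guesses = set()

        # Rule 1: increment a trailing digit run (Password1! -> Password2!).
        m = re.match(r"^(.*?)(\d+)(\W*)$", old)
        if m:
            head, digits, tail = m.groups()
            guesses.add(f"{head}{int(digits) + 1}{tail}")

        # Rule 2: rotate season words (Spring2026! -> Summer2026!).
        for i, season in enumerate(SEASONS):
            if old.startswith(season):
                guesses.add(SEASONS[(i + 1) % len(SEASONS)] + old[len(season):])

        # Rule 3: bump an embedded four-digit year (Spring2026! -> Spring2027!).
        m = re.search(r"(19|20)\d{2}", old)
        if m:
            guesses.add(old.replace(m.group(0), str(int(m.group(0)) + 1)))

        return guesses

    print(likely_successors("Spring2026!"))  # {'Summer2026!', 'Spring2027!'}

A handful of rules like these, run against a single leaked password, can recover a meaningful share of the “new” passwords users choose under forced rotation.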


The “New” NIST Model: Length, Screening, and Changes Only When Warranted

NIST’s current password guidance lives primarily in SP 800-63B (Authentication and Lifecycle Management), including the updated SP 800-63B-4 publication.

Length (and passphrases) over composition rules

Modern NIST guidance explicitly rejects the idea that systems should require mixtures of character types as a rule.

In SP 800-63B (rev 3), NIST states: “No other complexity requirements for memorized secrets SHOULD be imposed.”

In SP 800-63B-4, NIST is even more direct: verifiers/CSPs shall not impose composition rules like requiring mixtures of different character types.

Why? Because the evidence from breached password datasets and real attacker tooling shows that composition rules often don’t create truly unpredictable passwords—they create predictable complexity. Attackers know the tricks (capital first letter, symbol at end, digit substitutions).

Stop forcing routine password expiration

This is one of the most visible changes. NIST’s position now is:

  • Do not require periodic changes (i.e., arbitrary expiration)
  • Do force a change when there is evidence of compromise

SP 800-63B-4 states verifiers/CSPs shall not require subscribers to change passwords periodically, but shall force a change when compromise is suspected or confirmed.

NIST’s FAQ makes the same point plainly, quoting SP 800-63B Section 5.1.1.2: verifiers should not require arbitrary (periodic) changes, but shall force a change if there is evidence of compromise.

Add “compromise awareness”: block known-bad passwords

This is a crucial shift in thinking: instead of trying to manufacture strong passwords through composition constraints, NIST focuses on preventing the most common real-world failure mode—users choosing passwords that are already known to attackers.

SP 800-63B requires verifiers to check chosen passwords against blocklists of values known to be commonly used, expected, or compromised, and to reject any matches.
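
A minimal sketch of a verifier-side check in this spirit, assuming a local blocklist file named compromised-passwords.txt (a placeholder) and the 8-character floor from SP 800-63B:

    # Minimal SP 800-63B-style screening sketch. The blocklist file name is a
    # placeholder; real deployments use large breach-derived corpora.

    MIN_LENGTH = 8    # SP 800-63B floor for user-chosen passwords
    MAX_LENGTH = 64   # SP 800-63B asks verifiers to permit at least 64 chars

    def load_blocklist(path: str = "compromised-passwords.txt") -> set[str]:
        with open(path, encoding="utf-8") as f:
            return {line.strip().lower() for line in f if line.strip()}

    def check_password(candidate: str, blocklist: set[str]) -> tuple[bool, str]:
        if len(candidate) < MIN_LENGTH:
            return False, f"Must be at least {MIN_LENGTH} characters."
        if len(candidate) > MAX_LENGTH:
            return False, f"Must be at most {MAX_LENGTH} characters."
        if candidate.lower() in blocklist:
            return False, "That password appears in breach data; pick another."
        # Deliberately absent: any "must contain a symbol/number" rules.
        return True, "OK"

Note what is not here: no composition checks and no expiration timer. The only change trigger would come from evidence of compromise, handled elsewhere.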

Support password managers and modern UX realities

Password managers fit squarely into the thrust of the modern guidance: make it practical for users to use long, unique secrets (often via password managers) and stop punishing them with frequent forced changes that produce predictable behavior.


Why NIST Changed: The Threat Model (and the Humans) Changed

NIST’s shift isn’t “soft.” It’s a correction based on how password attacks and user behavior actually work today.

Attackers don’t “guess” like they used to

Modern attacks are dominated by:

  • credential stuffing (reused passwords from breaches)
  • password spraying (common passwords across many accounts)
  • offline cracking against stolen hashes using GPUs/optimized rulesets
  • targeted guessing using known patterns and prior passwords

Composition rules don’t meaningfully stop these. Screening against known-compromised passwords and enforcing sufficient length helps more.

Forced rotation often reduces entropy

When you force changes on a schedule, you create:

  • predictable sequences
  • minor edits
  • more reuse
  • more insecure storage practices

So the policy sounds strong but can reduce real security.

Security is bigger than the password now

NIST’s 800-63B guidance sits in a broader modern authentication strategy: rate limiting, MFA, secure recovery, protection against compromised authenticators, and better lifecycle management—not just “make passwords weirder.”
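
As one concrete example of those compensating controls, here is a hypothetical sketch of per-account throttling of failed login attempts. The window size and threshold are arbitrary illustration values, not numbers NIST prescribes:

    import time
    from collections import defaultdict, deque

    WINDOW_SECONDS = 300   # look back five minutes (arbitrary)
    MAX_FAILURES = 5       # failures allowed per window (arbitrary)

    _failures: dict[str, deque] = defaultdict(deque)

    def allow_attempt(username: str, now: float | None = None) -> bool:
        """Return False once an account has too many recent failures."""
        now = time.monotonic() if now is None else now
        q = _failures[username]
        while q and now - q[0] > WINDOW_SECONDS:   # drop stale entries
            q.popleft()
        return len(q) < MAX_FAILURES

    def record_failure(username: str, now: float | None = None) -> None:
        _failures[username].append(time.monotonic() if now is None else now)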


TL;DR: Password Policy Takeaway

  • The old “complex + rotate often” mindset tried to force security through rules that users could comply with only by becoming predictable.
  • The new NIST guidance is evidence-driven: longer is better than weirder, don’t rotate without cause, and block known-compromised passwords.
  • Passwords remain a weak link, so the win comes from combining better password policy with stronger authentication controls and better user enablement (like password managers).

Wrapping it All Up:

NIST’s revised password guidance reflects a broader evolution in cybersecurity thinking: effective security is not achieved by punishing users with rigid rules, but by aligning controls with real threats and real human behavior. The move away from frequent forced changes and arbitrary complexity is not a relaxation of standards—it is a refinement based on evidence. Long, memorable passphrases, protection against known-compromised passwords, and changes triggered by actual risk produce stronger outcomes than policies rooted in outdated assumptions. For organizations updating standards, instructors teaching security fundamentals, or professionals revisiting long-held beliefs, NIST’s modern guidance sends a clear message: better passwords come from smarter design, not stricter rituals.


References and Further Reading

Primary NIST sources

  • NIST SP 800-63 Digital Identity Guidelines (overview page)
    https://pages.nist.gov/800-63-4/
    Landing page for the full SP 800-63-4 Digital Identity Guidelines, including authentication, federation, and lifecycle management.
  • NIST SP 800-63B – Digital Identity Guidelines: Authentication and Lifecycle Management
    https://pages.nist.gov/800-63-4/sp800-63b.html
    Authoritative source for NIST’s current password (memorized secret) requirements, including length, screening, and change rules.
  • NIST SP 800-63 Digital Identity Guidelines — FAQ
    https://pages.nist.gov/800-63-FAQ/
    This FAQ includes answers related to the recommendation against periodic password expiration and other modern password practices.

 

Tuesday, January 20, 2026

VXLAN Explained: Extending Layer 2 for the Modern, Scalable Network

Virtual Extensible LAN (VXLAN) is one of the most important technologies enabling modern data center and cloud networking. As organizations moved from small, static networks to highly virtualized, software-defined, and cloud-scale environments, traditional networking constructs—especially VLANs—began to show their limits. VXLAN was designed specifically to overcome those limitations while preserving familiar Layer 2 semantics.

This article provides a comprehensive overview of VXLAN: what it is, why it exists, how it works at the packet level, and how it compares to traditional VLANs. The goal is to demystify VXLAN while maintaining the technical rigor expected by networking and cybersecurity professionals.


What Is VXLAN?

VXLAN (Virtual Extensible LAN) is a Layer 2 overlay technology that allows Ethernet frames to be encapsulated inside Layer 3 packets. In practical terms, VXLAN lets devices behave as if they are on the same local Ethernet segment, even when they are separated by routed IP networks.

VXLAN is defined in RFC 7348 and is widely implemented across data center switches, hypervisors, and cloud platforms. It is a foundational technology for modern data center fabrics, software-defined networking (SDN), and multi-tenant cloud infrastructure.

At its core, VXLAN solves a scalability problem: how to extend Layer 2 networks across large, distributed Layer 3 infrastructures without the operational and architectural drawbacks of traditional Layer 2 extension mechanisms.


Why VXLAN Exists: The Limitations of VLANs

VLAN Scalability Constraints

VLANs use a 12-bit VLAN ID, which caps the space at 4,096 identifiers (4,094 of them usable in practice, since IDs 0 and 4095 are reserved). While sufficient for small and medium environments, this becomes a hard ceiling in:

  • Large enterprise data centers
  • Multi-tenant cloud environments
  • Service provider networks
  • Containerized and microservices-based architectures

In environments where thousands of tenants, applications, or security zones are required, VLAN exhaustion becomes inevitable.

Layer 2 Sprawl and Instability

Traditional VLAN-based networks rely heavily on Layer 2 constructs such as:

  • Spanning Tree Protocol (STP)
  • Broadcast and unknown unicast flooding
  • MAC address learning across the fabric

As Layer 2 domains grow, they become increasingly fragile, difficult to troubleshoot, and prone to large blast-radius failures.

The Need for Layer 2 Mobility Over Layer 3

Virtualization introduced a new requirement: workload mobility. Virtual machines and containers often need to move between hosts or data centers without changing IP addresses. VLANs struggle to provide this flexibility across routed networks without complex designs or proprietary extensions.

VXLAN was created to address all of these issues.


VXLAN as an Overlay Network

VXLAN is best understood as an overlay network built on top of an existing IP underlay network.

  • Underlay: A standard Layer 3 IP network that provides basic IP connectivity (routing, ECMP, resiliency).
  • Overlay: VXLAN tunnels that carry Layer 2 Ethernet frames across the IP underlay.

This separation of concerns is deliberate:

  • The underlay focuses on fast, stable IP routing.
  • The overlay provides logical Layer 2 connectivity and tenant isolation.

Key VXLAN Components

VXLAN Network Identifier (VNI)

Instead of a 12-bit VLAN ID, VXLAN uses a 24-bit VXLAN Network Identifier (VNI).

  • Maximum VNIs: 2^24 = 16,777,216 (~16 million)
  • Each VNI represents a logical Layer 2 segment
  • Multiple VNIs can coexist over the same physical infrastructure

This is the primary reason VXLAN scales so well compared to VLANs.

VXLAN Tunnel Endpoints (VTEPs)

A VTEP is the device that performs VXLAN encapsulation and decapsulation. VTEPs can be:

  • Physical switches
  • Virtual switches (e.g., in hypervisors)
  • Software routers or gateways

Each VTEP has:

  • One or more IP addresses on the underlay network
  • Knowledge of which local MAC addresses belong to which VNI
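
To ground this, here is a toy Python model of the state a VTEP keeps. The addresses and VNI below are invented, and real implementations live in switch ASICs or hypervisor virtual switches:

    # Toy model of a VTEP forwarding table: (VNI, MAC) -> remote VTEP IP.

    class Vtep:
        def __init__(self, underlay_ip: str) -> None:
            self.underlay_ip = underlay_ip
            # Learned via flood-and-learn or pushed by a control plane (EVPN).
            self.mac_table: dict[tuple[int, str], str] = {}

        def learn(self, vni: int, mac: str, remote_vtep_ip: str) -> None:
            self.mac_table[(vni, mac)] = remote_vtep_ip

        def lookup(self, vni: int, mac: str) -> str | None:
            """Where should a frame for this MAC in this VNI be tunneled?"""
            return self.mac_table.get((vni, mac))

    leaf1 = Vtep("10.0.0.1")
    leaf1.learn(vni=10100, mac="aa:bb:cc:dd:ee:01", remote_vtep_ip="10.0.0.2")
    print(leaf1.lookup(10100, "aa:bb:cc:dd:ee:01"))  # 10.0.0.2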

How VXLAN Works: Packet Encapsulation in Detail

One of the most important aspects of VXLAN is how it encapsulates data. Understanding this process clarifies both its power and its overhead.

Figure: Key VXLAN Components

Step-by-Step Encapsulation

  1. Original Ethernet Frame
    • Source MAC
    • Destination MAC
    • EtherType
    • Payload (e.g., IP, TCP, application data)
  2. VXLAN Header
    • 8 bytes in length
    • Contains the 24-bit VNI
    • Includes flags indicating a valid VXLAN packet
  3. UDP Header
    • Destination port: UDP 4789 (standard VXLAN port)
    • Source port: dynamically chosen (used for ECMP hashing)
  4. Outer IP Header
    • Source IP: VTEP IP address
    • Destination IP: remote VTEP IP address
    • Enables routing across the underlay network
  5. Outer Ethernet Header
    • Source and destination MAC addresses for the physical next hop

 

Figure: VXLAN Encapsulation and Headers

The result is a fully routable IP packet that can traverse any IP network, while still carrying an intact Layer 2 frame inside.
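
The 8-byte VXLAN header itself is simple enough to build by hand. A minimal Python sketch following the RFC 7348 layout (the surrounding UDP, IP, and Ethernet layers are left to the host stack; the example VNI is arbitrary):

    import struct

    VXLAN_PORT = 4789  # IANA-assigned UDP destination port

    def vxlan_header(vni: int) -> bytes:
        """RFC 7348: 8-bit flags, 24 reserved bits, 24-bit VNI, 8 reserved bits."""
        if not 0 <= vni < 2**24:
            raise ValueError("VNI must fit in 24 bits")
        word1 = 0x08 << 24      # I flag set: the VNI field is valid
        word2 = vni << 8        # VNI in the top 24 bits of the second word
        return struct.pack("!II", word1, word2)

    def encapsulate(inner_frame: bytes, vni: int) -> bytes:
        """VXLAN payload = 8-byte header + the original Ethernet frame."""
        return vxlan_header(vni) + inner_frame

    print(vxlan_header(10100).hex())  # 0800000000277400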


Protocols Used by VXLAN

VXLAN intentionally leverages existing, well-understood protocols:

UDP

VXLAN uses UDP as its transport mechanism. This choice provides several advantages:

  • Compatibility with existing IP networks
  • Support for Equal-Cost Multi-Path (ECMP) routing
  • Simplified hardware offload in switches and NICs

VXLAN itself does not add TCP-style reliability: the Ethernet frames it carries never had a delivery guarantee to begin with, and end-to-end reliability, where needed, is provided by the protocols inside the encapsulated frame (e.g., TCP in the inner payload).

IP (IPv4 or IPv6)

The outer IP header allows VXLAN traffic to traverse any routed network, including:

  • Spine-leaf data center fabrics
  • WAN links
  • Cloud provider backbones

VXLAN works equally well over IPv4 and IPv6 underlays.

Control Plane Options

VXLAN can operate in two primary modes:

  • Flood-and-learn (data-plane learning)
  • Control-plane driven (e.g., BGP EVPN)

Modern deployments overwhelmingly favor BGP EVPN, which provides:

  • Scalable MAC and IP address distribution
  • Reduced flooding
  • Integrated Layer 2 and Layer 3 services
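
Conceptually (the field names below are illustrative, not the actual BGP EVPN encoding), a type-2 MAC/IP advertisement lets each VTEP populate the same (VNI, MAC) → remote-VTEP map shown earlier, proactively rather than by flooding:

    from dataclasses import dataclass

    # Toy stand-in for an EVPN type-2 (MAC/IP advertisement) route.

    @dataclass(frozen=True)
    class MacIpRoute:
        vni: int
        mac: str
        ip: str
        next_hop_vtep: str   # underlay IP of the advertising VTEP

    def import_routes(routes: list[MacIpRoute]) -> dict[tuple[int, str], str]:
        """Build the (vni, mac) -> remote VTEP map from advertisements."""
        return {(r.vni, r.mac): r.next_hop_vtep for r in routes}

    table = import_routes([
        MacIpRoute(vni=10100, mac="aa:bb:cc:dd:ee:02",
                   ip="192.168.1.20", next_hop_vtep="10.0.0.2"),
    ])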

VXLAN vs. VLAN: Similarities and Differences

Similarities

  • Both provide Layer 2 segmentation
  • Both allow logical separation of traffic
  • Both preserve Ethernet semantics (MAC-based forwarding)
  • Both can be used for security zoning and traffic isolation

Key Differences

Comparison          VLAN                      VXLAN
Identifier size     12-bit VLAN ID            24-bit VNI
Max segments        4,096                     ~16 million
Transport           Native Ethernet           Encapsulated over IP
Scalability         Limited                   Massive
Dependency          Layer 2 adjacency         Layer 3 routed underlay
Multi-tenancy       Constrained               Designed for it

In short, VLANs are simple and effective for smaller, localized networks, while VXLAN is engineered for scale, resilience, and cloud-native architectures.


Why VXLAN Is Used in Modern Networks

Data Center Fabrics

VXLAN is a cornerstone of spine-leaf architectures, enabling:

  • Any-to-any connectivity
  • Large Layer 2 domains without STP
  • Predictable performance and fault isolation

Cloud and Multi-Tenant Environments

Public and private cloud providers rely on VXLAN to:

  • Isolate tenants securely
  • Provide overlapping IP address spaces
  • Rapidly provision and deprovision networks

Virtualization and Workload Mobility

VXLAN allows virtual machines and containers to move freely across hosts and racks while maintaining IP and MAC consistency—critical for application availability and disaster recovery.


VXLAN and Security Considerations

While VXLAN itself is not a security protocol, it has important security implications:

  • Segmentation: VNIs provide strong logical isolation
  • Visibility challenges: Encapsulation can obscure traffic from legacy security tools
  • Encryption: VXLAN does not encrypt payloads; IPsec or MACsec must be layered on if confidentiality is required

From a Zero Trust or modern security architecture perspective, VXLAN is often paired with identity-based controls, microsegmentation, and distributed firewalls.


VXLAN in Context: Evolution, Not Replacement

It is important to emphasize that VXLAN does not eliminate VLANs. In most real-world designs:

  • VLANs are still used locally on access ports
  • VXLAN extends those VLANs logically across routed networks
  • Gateways map VLANs to VNIs at the fabric edge

VXLAN is therefore an evolutionary technology, not a wholesale rejection of Ethernet networking principles.


Conclusion

VXLAN exists because modern networks demand scale, flexibility, and resilience that traditional Layer 2 designs cannot deliver alone. By encapsulating Ethernet frames inside IP packets and using a vastly expanded identifier space, VXLAN allows organizations to extend Layer 2 connectivity across Layer 3 infrastructures in a clean, scalable, and cloud-ready way.

Understanding VXLAN is no longer optional for networking and cybersecurity professionals. It underpins data center fabrics, cloud platforms, SDN solutions, and increasingly, enterprise campus designs. While VLANs remain foundational, VXLAN represents the logical next step in network evolution—preserving what works while removing the constraints that no longer do.

For professionals preparing for modern networking roles, certifications, or architecture design responsibilities, VXLAN is a concept worth mastering—not just as a protocol, but as a design philosophy aligned with how networks are built today.

Monday, January 5, 2026

What Is an Operating System?

Happy New Year! I wanted to start off the new year by getting back to basics with a topic that many of my students frequently ask about: operating systems. Understanding how operating systems work provides the foundation for learning networking, cybersecurity, troubleshooting, and nearly every other area of modern computing.

An Operating System (OS) is system software that manages computer hardware and software resources and provides common services for computer programs. It acts as an intermediary between applications and the hardware, ensuring that programs run correctly and efficiently while enforcing rules for resource sharing, security, and stability.


Major Components of an Operating System

An OS typically includes the following components:

  1. Kernel – The core part of the OS that directly manages hardware and system resources.
  2. Device Drivers – Specialized modules that allow the OS to communicate with hardware devices.
  3. Process Management – Controls program execution, scheduling, and multitasking.
  4. Memory Management – Allocates RAM to applications and ensures isolation between processes.
  5. File System Management – Manages how data is stored, retrieved, and organized on storage devices.
  6. I/O System – Handles communication between the OS and peripherals like keyboards, displays, printers, and storage.
  7. Security & Access Control – Ensures user authentication, permission enforcement, and system integrity.
  8. User Interface – Provides a way for humans to interact with the system (Command Line Interfaces, GUIs, APIs).

The Kernel in Detail

The kernel is the foundation of the OS. It runs in privileged mode (kernel mode), giving it unrestricted access to system resources. Applications, on the other hand, run in user mode with restricted privileges.

Core Responsibilities of the Kernel:

  1. Process Management – Creates, schedules, and terminates processes. It decides which process gets CPU time and for how long.
  2. Memory Management – Allocates memory to processes, tracks usage, handles virtual memory, and manages swapping between RAM and disk.
  3. Device Management – Interfaces with device drivers to control hardware (e.g., disk drives, network interfaces).
  4. File System Access – Provides APIs for file operations like open, read, write, and close.
  5. System Calls – Defines the interface between user applications and the OS kernel (e.g., open(), fork(), read() in Linux).
  6. Security & Protection – Enforces privilege levels, process isolation, and prevents unauthorized memory access.
  7. Networking – Manages network protocols, sockets, and packet transmission.

 

Figure: Anatomy of a Linux system call (ARM64)

Kernel Interaction with Other Components

  • Applications request services via system calls.
  • Kernel processes the request and interacts with hardware (via device drivers).
  • Results are returned to the application in user space.
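
This request/serve/return loop is easy to observe, because most language runtimes are thin wrappers over the kernel's system calls. A small Unix-only Python sketch (the file name is arbitrary; running it under strace shows the underlying open/write/read/fork calls):

    import os

    fd = os.open("demo.txt", os.O_CREAT | os.O_WRONLY | os.O_TRUNC, 0o644)  # open(2)
    os.write(fd, b"hello from user space\n")                                # write(2)
    os.close(fd)                                                            # close(2)

    fd = os.open("demo.txt", os.O_RDONLY)
    data = os.read(fd, 1024)                                                # read(2)
    os.close(fd)
    print(data)  # b'hello from user space\n'

    pid = os.fork()            # fork(2): Unix-only process creation
    if pid == 0:
        os._exit(0)            # child exits immediately
    else:
        os.waitpid(pid, 0)     # wait(2): parent reaps the child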

Types of Kernels

  1. Monolithic Kernel – Large kernel where all OS services (memory, file systems, device drivers) run in kernel mode (e.g., Linux).
  2. Microkernel – Minimal kernel with only essential functions (process and memory management), while drivers and services run in user space (e.g., Minix, QNX).
  3. Hybrid Kernel – A mix of both, providing microkernel architecture but with some monolithic elements for performance (e.g., Windows NT, macOS XNU).

Kernel Comparison: Windows vs. macOS vs. Linux

1. Windows (NT Kernel)

  • Type: Hybrid kernel.
  • Architecture: Combines microkernel principles with monolithic performance.
  • Features:
    • Supports multiple environment subsystems (Win32 and, historically, POSIX and OS/2).
    • Uses the Hardware Abstraction Layer (HAL) for portability.
    • Strong focus on backward compatibility with legacy software.
  • Strengths: Rich driver support, backward compatibility, strong enterprise integration.
  • Weaknesses: Larger attack surface due to complexity; more closed-source.

2. macOS (XNU Kernel)

  • Type: Hybrid kernel (XNU = “X is Not Unix”).
  • Architecture: Combination of Mach (microkernel) + BSD (Unix subsystem) + Apple I/O Kit.
  • Features:
    • Mach provides inter-process communication (IPC), memory management.
    • BSD layer provides POSIX compliance, file systems, and networking.
    • I/O Kit supports modular device drivers.
  • Strengths: Unix-based stability, good memory protection, modern design.
  • Weaknesses: Limited hardware support compared to Linux/Windows.

3. Linux (Monolithic Kernel with Modular Extensions)

  • Type: Monolithic (but modular).
  • Architecture:
    • Device drivers, file systems, networking stacks all run in kernel mode.
    • Supports dynamically loadable kernel modules.
  • Features:
    • Rich system call interface.
    • Broad hardware and platform support (phones, servers, supercomputers).
    • Open source—highly customizable.
  • Strengths: High performance, flexibility, large community, transparency.
  • Weaknesses: Complexity for beginners, fragmentation across distributions.

 



Similarities Between the Kernels

  • All manage processes, memory, file systems, devices, and networking.
  • All provide APIs/system calls for applications.
  • All enforce security boundaries between user mode and kernel mode.
  • All support multitasking and multiprocessor hardware.

Differences Between the Kernels

  • Design Philosophy:
    • Windows/macOS → hybrid (balance performance with modularity).
    • Linux → monolithic but modular (simplicity + performance).
  • Openness:
    • Windows → closed source; macOS → proprietary overall, though Apple publishes the XNU kernel source.
    • Linux → open source.
  • Portability:
    • Linux → runs on everything from phones to supercomputers.
    • Windows/macOS → limited to vendor-supported hardware.
  • Driver Models:
    • Linux → community-driven and modular.
    • Windows → vendor-driven and highly standardized.
    • macOS → tightly controlled Apple ecosystem.

In summary:
The kernel is the “core manager” of an operating system, bridging software and hardware. While Windows NT and macOS XNU use hybrid kernel designs for flexibility and modularity, Linux uses a powerful monolithic but modular kernel for performance and openness. Despite different philosophies, they all share the same essential role: controlling hardware resources, enforcing security, and enabling applications to run smoothly.