In December 2024, I finally decided to build my first real homelab, aiming to make it as small and powerful as possible. I ended up with this configuration:
- CPU: Intel Core i9-14900K
- Motherboard: ASRock Rack W680D4ID-2T/G5/X550
- RAM: Kingston Server Premier 96GB DDR5-5600 CL46
- GPU: ASRock Intel Arc A380 Low Profile 6GB
- PSU: HDPLEX 250W GaN
- Cooling: Noctua NH-L9x65
- Case: Custom Mod SL3 modified to fit Deep mini-ITX
- Storage:
  - 2 x M.2 (Kingston DC2000B 960GB)
  - 4 x SATA (Samsung SM863 1.92TB)
I initially didn’t think much about power-saving features, and that was my first mistake: only after receiving the parts and building the system did I decide to optimize idle efficiency. I chose Intel over AMD specifically for better idle efficiency; however, I failed to carefully consider the other components.
The first thing you should do is thoroughly evaluate each part you are considering buying, as every component plays a critical role in overall efficiency.
Consider Going Intel Instead of AMD
To be honest, this was a tough decision. I used to believe AMD dominated recent generations, especially with the Zen 4 and Zen 5 processors. However, Intel has led in mobile processors for nearly two decades and has made significant advances in power efficiency there. Its desktop and mobile processors typically share the same architecture, so that power-management work carries over to desktop parts without desktop-specific adaptations.
Key advantages of Intel processors:
- Proper Modern Standby (S0ix support): enables very deep sleep C-states for both the CPU and the chipset.
- In-house chipsets: allow dynamic power gating of components like USB, SATA, PCIe, and audio.
- Monolithic CPU die: no interconnect overhead, which benefits idle power. AMD desktop processors use a multi-die architecture; monolithic dies are found only in their earlier architectures and some mobile processors.
- big.LITTLE architecture: Intel's efficiency cores consume far less power, and the Linux scheduler takes advantage of them under low loads.
For example, even a monolithic AMD CPU like the 8700G still consumes around 20 W at its deepest idle. Intel, by contrast, sips only about 4-5 W (even with monsters like the i9-14900K).
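If you want to sanity-check idle figures like these on your own machine, turbostat (usually shipped in the linux-tools / kernel-tools package) reports RAPL package power and package C-state residencies. A minimal check, assuming turbostat is installed, might look like this; note that RAPL covers only the CPU package, not wall power:
# Sample counters while the system sits idle for 30 seconds;
# the PkgWatt column shows package power, the Pkg%pc* columns show package C-state residency
sudo turbostat --quiet sleep 30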
Every PCIe Device Should Support ASPM
Both Intel and AMD platforms won’t utilize deeper C-states if any PCIe device lacks Active-State Power Management (ASPM). To ensure proper power management, always verify ASPM support for the motherboard and each PCIe device, including on-board components.
Devices to Check:
- Motherboard
- NVMe drives
- Host adapters (HBA)
- Network adapters (NIC)
- GPUs
- PCIe-to-PCIe bridges
For ASPM to work, it must be enabled for both upstream and downstream PCIe ports. In practice, this means that your motherboard should allow you to enable ASPM for all root ports you are using, and all PCIe devices must support ASPM.
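To see which root port each device actually sits behind (i.e. which upstream/downstream pair has to agree on ASPM), the tree view of lspci is handy:
# Print the PCIe topology; every endpoint is listed under the root port or bridge it hangs off
lspci -tv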
Modern devices such as GPUs and consumer NVMe drives typically support ASPM. However, some older or specialized devices, like network adapters and HBAs, may not. For example, my Intel X550 NIC supported ASPM in firmware 3.20 but had it disabled in 3.60.
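Since behaviour like this can change between firmware releases, it is worth checking what your NIC actually runs. On Linux, ethtool reports it; the interface name below is just a placeholder, substitute your own:
# Show driver, driver version and firmware version for the given interface
ethtool -i eth0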
Technically, enabling ASPM should be as straightforward as turning it on in the platform firmware settings (UEFI). However, most UEFI implementations do not actively manage ASPM and other power-related parameters themselves. To ensure proper ASPM support, look for options such as Native ASPM or similar settings in UEFI that hand control over to the operating system. These options are commonly enabled by default.
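One way to confirm that the OS really got control is to look at the kernel log after boot. Messages mentioning ASPM or the ACPI _OSC handshake usually reveal whether the firmware handed PCIe control over; the exact wording varies between kernel versions:
# Look for ASPM- and _OSC-related messages from early boot
sudo dmesg | grep -iE 'aspm|_osc'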
After booting, Linux reads ACPI tables and ensures it can manage PCIe links and ASPM. By default, modern Linux attempts to enable ASPM. However, for various reasons, it might fail to enable ASPM for certain devices. In such cases, you can try switching the ASPM policy settings manually in Linux:
# Attempt to use L1
echo 'powersave' > '/sys/module/pcie_aspm/parameters/policy';
# Attempt to use L1 and L1 substates
echo 'powersupersave' > '/sys/module/pcie_aspm/parameters/policy';
You can also use the kernel parameter pcie_aspm.policy (e.g. pcie_aspm.policy=powersupersave).
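To make the policy persistent across reboots, you can put the parameter on the kernel command line. A sketch for a GRUB-based distribution (file paths and the update command differ between distros) could look like this:
# Append the parameter to the default kernel command line
sudo sed -i 's/GRUB_CMDLINE_LINUX_DEFAULT="/&pcie_aspm.policy=powersupersave /' /etc/default/grub
# Regenerate the GRUB config (grub2-mkconfig -o /boot/grub2/grub.cfg on some distros)
sudo update-grub
# After a reboot, the active policy is shown in square brackets
cat /sys/module/pcie_aspm/parameters/policy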
After setting the ASPM policy, you can verify its state using the following command:
lspci -vv | awk '/ASPM/{print $0}' RS= | grep --color -P '(^[a-z0-9:.]+|ASPM )'
In my case, the output looks like this:
00:06.0 PCI bridge: Intel Corporation Raptor Lake PCIe 4.0 Graphics Port (rev 01) (prog-if 00 [Normal decode])
LnkCap: Port #5, Speed 16GT/s, Width x4, ASPM L1, Exit Latency L1 <16us
LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
00:1a.0 PCI bridge: Intel Corporation Alder Lake-S PCH PCI Express Root Port #25 (rev 11) (prog-if 00 [Normal decode])
LnkCap: Port #25, Speed 16GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
00:1b.0 PCI bridge: Intel Corporation Device 7ac0 (rev 11) (prog-if 00 [Normal decode])
LnkCap: Port #17, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk-
00:1c.0 PCI bridge: Intel Corporation Alder Lake-S PCH PCI Express Root Port #1 (rev 11) (prog-if 00 [Normal decode])
LnkCap: Port #1, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
00:1c.7 PCI bridge: Intel Corporation Alder Lake-S PCH PCI Express Root Port #8 (rev 11) (prog-if 00 [Normal decode])
LnkCap: Port #8, Speed 8GT/s, Width x1, ASPM L1, Exit Latency L1 <64us
LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
01:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM981/PM981/PM983 (prog-if 02 [NVM Express])
LnkCap: Port #0, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
02:00.0 Non-Volatile memory controller: Kingston Technology Company, Inc. DC2000B NVMe SSD [E18DC] (rev 01) (prog-if 02 [NVM Express])
LnkCap: Port #0, Speed 16GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
04:00.0 Ethernet controller: Intel Corporation Ethernet Controller X550 (rev 01)
LnkCap: Port #0, Speed 8GT/s, Width x4, ASPM L0s L1, Exit Latency L0s <2us, L1 <16us
LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
04:00.1 Ethernet controller: Intel Corporation Ethernet Controller X550 (rev 01)
LnkCap: Port #0, Speed 8GT/s, Width x4, ASPM L0s L1, Exit Latency L0s <2us, L1 <16us
LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
05:00.0 PCI bridge: ASRock Incorporation Device 1150 (rev 06) (prog-if 00 [Normal decode])
LnkCap: Port #0, Speed 5GT/s, Width x1, ASPM L0s L1, Exit Latency L0s <512ns, L1 <32us
LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
Switching the policy is fine, but I don’t personally recommend force-enabling ASPM with pcie_aspm=force, since it can lead to platform instability. You can still try, though; for example, there is an excellent post from z8 – The Curse of ASPM.
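That post covers forcing ASPM per device by rewriting the ASPM Control field (bits [1:0] of the PCIe Link Control register) with setpci. Purely as a sketch of the idea, using my X550 at 04:00.0 as the example address and assuming the register currently reads 40 (hex); it has to be applied to the device and its upstream port, and it can hang the machine, so treat it as experimentation only:
# Read the Link Control register (PCIe capability offset 0x10; setpci treats numbers as hex)
sudo setpci -s 04:00.0 CAP_EXP+10.b
# Write it back with the ASPM Control field set to 10b (L1 only),
# preserving the upper bits of the value read above (assumed to be 40 here)
sudo setpci -s 04:00.0 CAP_EXP+10.b=42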
Related BIOS Options
These options helped me enable ASPM on a platform where ASPM support was completely absent out of the box; I assume that is because it is first and foremost a server platform.
Use Enabled instead of Auto where possible, since Auto can often mean Disabled.
RC ACPI Settings
- PCI Express Native Control – Enabled, allows the OS to manage PCIe links.
- Native ASPM – Enabled, allows the OS to take control over ASPM.

Chipset Configuration
- DMI ASPM Support – Enabled, ASPM support from the CPU side of DMI[^1].
- System Agent (SA) Configuration – Allows management of the CPU DMI link and the PCIe root ports connected directly to the CPU.
  - DMI/OPI Configuration
    - DMI ASPM – Set to L1 or L0sL1.
    - DMI ASPM Gen3 – Enable if present.
  - PCI Express Configuration – Enable ASPM for every port in use; disable unused ports if possible. Below is my config:
    - PCI Express Root Port 1 – The CPU M.2 port.
      - M.2_2 ASPM Support – Set to L1.
    - PCI Express Root Port 2 – The first half of the bifurcated X16 slot.
      - PCIE1 ASPM Support – Set to L1.
    - PCI Express Root Port 3 – The second half of the bifurcated X16 slot.
      - PCI Express Root Port 3 – Disabled, since the Intel Arc A380 only uses the first eight PCIe lanes.
- PCH-IO Configuration – Allows management of the PCH, i.e. Intel’s chipset.
  - PCI Express Configuration – Same as with the CPU ports: enable ASPM for every port in use; disable unused ports if possible. Below again is my config:
    - PCH DMI ASPM Support – Enabled, ASPM support from the PCH side of DMI.
    - PCI Express Root Port 1 – The Intel X550 NIC port.
      - LAN1_2 ASPM Support – Set to Disable, since in my case the X550 somehow has working ASPM only after disabling it. I hope it’s a UEFI bug and they just mixed up the label and the actual option value.
    - PCI Express Root Port 8 – The ASPEED AST1150 PCIe-to-PCIe bridge (part of the IPMI) port.
      - PCH_Port4 ASPM Support – Set to Auto.
    - PCI Express Root Port 25 – The PCH M.2 port.
      - M.2_1 ASPM Support – Set to Auto.
In my case, the CPU’s PCI Express Root Port had the L1 Substates option grayed out. If you encounter the same issue, you can try enabling the Enable ClockReq Messaging option below it; this will activate the L1 Substates option.
However, in my case, enabling L1 substates limited my platform to the PKG C3 state.
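To check whether L1 substates are actually advertised and enabled on a given link, lspci shows the L1 PM Substates capability on both ends; the device address below is one of my NVMe drives, substitute your own:
# L1SubCap lists what the device supports, L1SubCtl1/L1SubCtl2 what is currently enabled
sudo lspci -vvv -s 02:00.0 | grep -A 4 'L1 PM Substates'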
Additionally, disabling the Multi-VC option for all used CPU ports seems to help reach PKG C8-C10 in some cases. In my case, however, any occupied CPU slot limited the platform to the PKG C6 state.
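To verify which package C-state the platform actually reaches after all this tuning, you can watch the package C-state residencies while the system idles, for example with powertop, whose idle statistics include the package (Pkg HW) states:
# Collect a 60-second report; the Idle stats section shows package C-state residency
sudo powertop --time=60 --html=powertop.html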
Every PCIe Device Should Report Proper LTR Values
WIP
Every SATA Disk Should Support DevSlp or Slumber
WIP
[^1]: DMI is the link connecting the CPU and the Intel chipset.