PCI device stuck on PCIe 1.0 x1
https://askubuntu.com/questions/1563376/pci-device-stuck-on-pcie-1-0-x1

I'm using an external GPU (NVIDIA T1000 8GB) connected via the ExpressCard slot of my HP EliteBook 8470p. This setup worked well for weeks, and I got 30-40 FPS in my games. Today I was tinkering with configurations, and somehow my PCIe link is now locked to 1.0 x1, which dropped my in-game FPS to around 15. It was running at 2.0 x1 before (or even 2.0 x16, I'm not sure), and the card is capable of more according to lspci -vvv -s 02:00.0:
02:00.0 VGA compatible controller: NVIDIA Corporation TU117GL [T1000 8GB] (rev a1) (prog-if 00 [VGA controller])
Subsystem: NVIDIA Corporation TU117GL [T1000 8GB]
Physical Slot: 1
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0
Interrupt: pin A routed to IRQ 35
Region 0: Memory at d2000000 (32-bit, non-prefetchable) [size=16M]
Region 1: Memory at 440000000 (64-bit, prefetchable) [size=256M]
Region 3: Memory at d0000000 (64-bit, prefetchable) [size=32M]
Region 5: I/O ports at 2000 [size=128]
Expansion ROM at d3000000 [virtual] [disabled] [size=512K]
Capabilities: [60] Power Management version 3
Flags: PMEClk- DSI- D1- D2- AuxCurrent=375mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
Capabilities: [68] MSI: Enable+ Count=1/1 Maskable- 64bit+
Address: 00000000fee04004 Data: 0021
Capabilities: [78] Express (v1) Legacy Endpoint, MSI 00
DevCap: MaxPayload 256 bytes, PhantFunc 0, Latency L0s unlimited, L1 <64us
ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+
DevCtl: CorrErr+ NonFatalErr+ FatalErr+ UnsupReq+
RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
MaxPayload 128 bytes, MaxReadReq 512 bytes
DevSta: CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr+ TransPend-
LnkCap: Port #0, Speed 2.5GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <512ns, L1 <4us
ClockPM+ Surprise- LLActRep- BwNot- ASPMOptComp+
LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
ExtSynch- ClockPM+ AutWidDis- BWInt- AutBWInt-
LnkSta: Speed 2.5GT/s, Width x1 (downgraded)
TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
Capabilities: [100 v1] Virtual Channel
Caps: LPEVC=0 RefClk=100ns PATEntryBits=1
Arb: Fixed- WRR32- WRR64- WRR128-
Ctrl: ArbSelect=Fixed
Status: InProgress-
VC0: Caps: PATOffset=00 MaxTimeSlots=1 RejSnoopTrans-
Arb: Fixed- WRR32- WRR64- WRR128- TWRR128- WRR256-
Ctrl: Enable+ ID=0 ArbSelect=Fixed TC/VC=ff
Status: NegoPending- InProgress-
Capabilities: [258 v1] L1 PM Substates
L1SubCap: PCI-PM_L1.2+ PCI-PM_L1.1+ ASPM_L1.2+ ASPM_L1.1+ L1_PM_Substates+
PortCommonModeRestoreTime=255us PortTPowerOnTime=10us
L1SubCtl1: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2- ASPM_L1.1-
T_CommonMode=0us LTR1.2_Threshold=0ns
L1SubCtl2: T_PwrOn=10us
Capabilities: [128 v1] Power Budgeting <?>
Capabilities: [420 v2] Advanced Error Reporting
UESta: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
UEMsk: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
CESta: RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr-
CEMsk: RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
AERCap: First Error Pointer: 00, ECRCGenCap- ECRCGenEn- ECRCChkCap- ECRCChkEn-
MultHdrRecCap- MultHdrRecEn- TLPPfxPres- HdrLogCap-
HeaderLog: 00000000 00000000 00000000 00000000
Capabilities: [600 v1] Vendor Specific Information: ID=0001 Rev=1 Len=024 <?>
Capabilities: [900 v1] Null
Capabilities: [bb0 v1] Physical Resizable BAR
BAR 0: current size: 16MB, supported: 16MB
BAR 1: current size: 256MB, supported: 64MB 128MB 256MB
BAR 3: current size: 32MB, supported: 32MB
Kernel driver in use: nvidia
Kernel modules: nvidiafb, nouveau, nvidia_drm, nvidia
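The same negotiated-versus-capable link information is also exposed directly in sysfs, without parsing lspci (a sketch; the path assumes the GPU at 0000:02:00.0 as above):

```shell
# Link speed/width as negotiated right now vs. the device's maximum, from sysfs.
dev=/sys/bus/pci/devices/0000:02:00.0
for f in current_link_speed max_link_speed current_link_width max_link_width; do
    printf '%-19s %s\n' "$f:" "$(cat "$dev/$f")"
done
```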
Output of lspci -vvv -s 00:1c.3 (the root port the GPU is connected to, according to lspci -t):
00:1c.3 PCI bridge: Intel Corporation 7 Series/C216 Chipset Family PCI Express Root Port 4 (rev c4) (prog-if 00 [Normal decode])
Subsystem: Hewlett-Packard Company 7 Series/C216 Chipset Family PCI Express Root Port 4
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0, Cache Line Size: 64 bytes
Interrupt: pin D routed to IRQ 27
Bus: primary=00, secondary=24, subordinate=24, sec-latency=0
I/O behind bridge: f000-0fff [disabled] [16-bit]
Memory behind bridge: bf200000-bf2fffff [size=1M] [32-bit]
Prefetchable memory behind bridge: 00000000fff00000-00000000000fffff [disabled] [64-bit]
Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16- MAbort- >Reset- FastB2B-
PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
Capabilities: [40] Express (v2) Root Port (Slot+), MSI 00
DevCap: MaxPayload 128 bytes, PhantFunc 0
ExtTag- RBE+
DevCtl: CorrErr- NonFatalErr- FatalErr- UnsupReq-
RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop-
MaxPayload 128 bytes, MaxReadReq 128 bytes
DevSta: CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr+ TransPend-
LnkCap: Port #4, Speed 5GT/s, Width x1, ASPM L0s L1, Exit Latency L0s <512ns, L1 <16us
ClockPM- Surprise- LLActRep+ BwNot- ASPMOptComp-
LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
LnkSta: Speed 2.5GT/s, Width x1
TrErr- Train- SlotClk+ DLActive+ BWMgmt+ ABWMgmt-
SltCap: AttnBtn- PwrCtrl- MRL- AttnInd- PwrInd- HotPlug- Surprise-
Slot #3, PowerLimit 10W; Interlock- NoCompl+
SltCtl: Enable: AttnBtn- PwrFlt- MRL- PresDet- CmdCplt- HPIrq- LinkChg-
Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
SltSta: Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet+ Interlock-
Changed: MRL- PresDet- LinkState+
RootCap: CRSVisible-
RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible-
RootSta: PME ReqID 0000, PMEStatus- PMEPending-
DevCap2: Completion Timeout: Range BC, TimeoutDis+ NROPrPrP- LTR-
10BitTagComp- 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
FRS- LN System CLS Not Supported, TPHComp- ExtTPHComp- ARIFwd-
AtomicOpsCap: Routing- 32bit- 64bit- 128bitCAS-
DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis- LTR- 10BitTagReq- OBFF Disabled, ARIFwd-
AtomicOpsCtl: ReqEn- EgressBlck-
LnkCtl2: Target Link Speed: 5GT/s, EnterCompliance- SpeedDis-
Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
Compliance Preset/De-emphasis: -6dB de-emphasis, 0dB preshoot
LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
Retimer- 2Retimers- CrosslinkRes: unsupported
Capabilities: [80] MSI: Enable+ Count=1/1 Maskable- 64bit-
Address: fee10004 Data: 0021
Capabilities: [90] Subsystem: Hewlett-Packard Company 7 Series/C216 Chipset Family PCI Express Root Port 4
Capabilities: [a0] Power Management version 2
Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
Kernel driver in use: pcieport
nvidia-smi -q reports:
PCI
Bus : 0x02
Device : 0x00
Domain : 0x0000
Base Classcode : 0x3
Sub Classcode : 0x0
Device Id : 0x1FF010DE
Bus Id : 00000000:02:00.0
Sub System Id : 0x161210DE
GPU Link Info
PCIe Generation
Max : 1
Current : 1
Device Current : 1
Device Max : 3
Host Max : 2
Link Width
Max : 16x
Current : 1x
Bridge Chip
Type : N/A
Firmware : N/A
Replays Since Reset : 0
Replay Number Rollovers : 0
Tx Throughput : 0 KB/s
Rx Throughput : 0 KB/s
Atomic Caps Outbound : N/A
Atomic Caps Inbound : N/A
DMESG output for when I connected the card (hotplugged after boot, exactly what was working before):
[ 902.606829] pcieport 0000:00:1c.1: pciehp: Slot(1): Card present
[ 902.606849] pcieport 0000:00:1c.1: pciehp: Slot(1): Link Up
[ 902.735697] pci 0000:02:00.0: [10de:1ff0] type 00 class 0x030000 PCIe Legacy Endpoint
[ 902.735735] pci 0000:02:00.0: BAR 0 [mem 0x00000000-0x00ffffff]
[ 902.735762] pci 0000:02:00.0: BAR 1 [mem 0x00000000-0x0fffffff 64bit pref]
[ 902.735786] pci 0000:02:00.0: BAR 3 [mem 0x00000000-0x01ffffff 64bit pref]
[ 902.735800] pci 0000:02:00.0: BAR 5 [io 0x0000-0x007f]
[ 902.735814] pci 0000:02:00.0: ROM [mem 0x00000000-0x0007ffff pref]
[ 902.736022] pci 0000:02:00.0: PME# supported from D0 D3hot D3cold
[ 902.736256] pci 0000:02:00.0: 2.000 Gb/s available PCIe bandwidth, limited by 2.5 GT/s PCIe x1 link at 0000:00:1c.1 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
[ 902.736564] pci 0000:02:00.0: vgaarb: bridge control possible
[ 902.736567] pci 0000:02:00.0: vgaarb: VGA device added: decodes=io+mem,owns=none,locks=none
[ 902.736635] pci 0000:02:00.1: [10de:10fa] type 00 class 0x040300 PCIe Endpoint
[ 902.736667] pci 0000:02:00.1: BAR 0 [mem 0x00000000-0x00003fff]
[ 902.737086] pci 0000:02:00.0: BAR 1 [mem 0x440000000-0x44fffffff 64bit pref]: assigned
[ 902.737109] pci 0000:02:00.0: BAR 3 [mem 0xd0000000-0xd1ffffff 64bit pref]: assigned
[ 902.737129] pci 0000:02:00.0: BAR 0 [mem 0xd2000000-0xd2ffffff]: assigned
[ 902.737135] pci 0000:02:00.0: ROM [mem 0xd3000000-0xd307ffff pref]: assigned
[ 902.737138] pci 0000:02:00.1: BAR 0 [mem 0xd3080000-0xd3083fff]: assigned
[ 902.737144] pci 0000:02:00.0: BAR 5 [io 0x2000-0x207f]: assigned
[ 902.737151] pcieport 0000:00:1c.1: PCI bridge to [bus 02-22]
[ 902.737164] pcieport 0000:00:1c.1: bridge window [io 0x2000-0x3fff]
[ 902.737170] pcieport 0000:00:1c.1: bridge window [mem 0xd0000000-0xd3ffffff]
[ 902.737174] pcieport 0000:00:1c.1: bridge window [mem 0x440000000-0x44fffffff 64bit pref]
[ 902.737229] pci 0000:02:00.1: extending delay after power-on from D3hot to 20 msec
[ 902.737271] pci 0000:02:00.1: D0 power state depends on 0000:02:00.0
[ 902.737302] snd_hda_intel 0000:02:00.1: dmic_detect option is deprecated, pass snd-intel-dspcfg.dsp_driver=1 option instead
[ 902.737327] snd_hda_intel 0000:02:00.1: enabling device (0000 -> 0002)
[ 902.737394] snd_hda_intel 0000:02:00.1: Disabling MSI
[ 902.737399] snd_hda_intel 0000:02:00.1: Handle vga_switcheroo audio client
[ 903.292525] nvidia-nvlink: Nvlink Core is being initialized, major device number 511
[ 903.300807] nvidia 0000:02:00.0: enabling device (0000 -> 0003)
[ 903.301057] nvidia 0000:02:00.0: vgaarb: VGA decodes changed: olddecodes=io+mem,decodes=none:owns=none
[ 903.343915] NVRM: loading NVIDIA UNIX x86_64 Kernel Module 580.126.09 Wed Jan 7 22:59:56 UTC 2026
[ 903.364339] nvidia-modeset: Loading NVIDIA Kernel Mode Setting Driver for UNIX platforms 580.126.09 Wed Jan 7 22:32:52 UTC 2026
[ 904.241829] [drm] [nvidia-drm] [GPU ID 0x00000200] Loading driver
[ 904.262057] nvidia_uvm: module uses symbols nvUvmInterfaceDisableAccessCntr from proprietary module nvidia, inheriting taint.
[ 904.275112] [drm] Initialized nvidia-drm 0.0.0 20160202 for 0000:02:00.0 on minor 0
Note especially this line:
[ 902.736256] pci 0000:02:00.0: 2.000 Gb/s available PCIe bandwidth, limited by 2.5 GT/s PCIe x1 link at 0000:00:1c.1 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Yesterday, this same line said 4.000 Gb/s available, limited by 5.0 GT/s [...].
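The figures in that message follow directly from the PCIe line rates: Gen1 and Gen2 use 8b/10b encoding, so only 80% of the raw rate carries data, while Gen3 uses 128b/130b. A quick sanity check of the numbers (the helper name pcie_bw is mine):

```shell
#!/bin/sh
# Effective PCIe bandwidth = raw rate (GT/s) x lane count x line-code efficiency.
# Gen1/Gen2 use 8b/10b encoding (80% efficient); Gen3 uses 128b/130b (~98.5%).
pcie_bw() { # args: rate in GT/s, lane count, encoding efficiency
    awk -v r="$1" -v w="$2" -v e="$3" 'BEGIN { printf "%.3f\n", r * w * e }'
}
pcie_bw 2.5 1  0.8        # Gen1 x1  -> 2.000 Gb/s (the current dmesg figure)
pcie_bw 5   1  0.8        # Gen2 x1  -> 4.000 Gb/s (what it reported yesterday)
pcie_bw 8   16 0.984615   # Gen3 x16 -> ~126 Gb/s (the card's ceiling; the kernel
                          #             rounds per lane and prints 126.016)
```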
In the BIOS, I made sure that the ExpressCard speed is set to Generation 2.0.
I've tried forcing the PCIe link generation and width using setpci, but it didn't change anything; neither does putting the GPU under load.
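The setpci attempt amounted to something like this (a sketch, assuming the root port at 00:1c.3; in the standard PCIe capability layout, Link Control 2 is at offset 0x30 and Link Control at 0x10):

```shell
# Read Link Control 2 of the root port; the low 4 bits are the target link speed.
sudo setpci -s 00:1c.3 CAP_EXP+30.w

# Set the target link speed to 5 GT/s (Gen2): write 2 into the low 4 bits only.
sudo setpci -s 00:1c.3 CAP_EXP+30.w=0002:000f

# Set the Retrain Link bit (bit 5) in Link Control to renegotiate the link.
sudo setpci -s 00:1c.3 CAP_EXP+10.w=0020:0020

# Check whether the negotiated speed changed.
sudo lspci -vv -s 00:1c.3 | grep LnkSta:
```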
uname -a:
Linux hpelitebook8470p 6.8.4-060804-generic #202501300155 SMP PREEMPT_DYNAMIC Sat Feb 8 14:48:16 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
Kernel parameters:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi_osi=linux pci=nocrs nvidia_drm.modeset=1"
I did not change those from when it was working, though.