  Dec 03, 2021
    • misc: rtsx: Avoid mangling IRQ during runtime PM · 0edeb899
      Kai-Heng Feng authored
      
      After commit 5b4258f6 ("misc: rtsx: rts5249 support runtime PM"), when
      the rtsx controller is runtime suspended and the CPUs are brought
      offline and back online, the runtime resume of the controller fails:
      
      [   47.319391] smpboot: CPU 1 is now offline
      [   47.414140] x86: Booting SMP configuration:
      [   47.414147] smpboot: Booting Node 0 Processor 1 APIC 0x2
      [   47.571334] smpboot: CPU 2 is now offline
      [   47.686055] smpboot: Booting Node 0 Processor 2 APIC 0x4
      [   47.808174] smpboot: CPU 3 is now offline
      [   47.878146] smpboot: Booting Node 0 Processor 3 APIC 0x6
      [   48.003679] smpboot: CPU 4 is now offline
      [   48.086187] smpboot: Booting Node 0 Processor 4 APIC 0x1
      [   48.239627] smpboot: CPU 5 is now offline
      [   48.326059] smpboot: Booting Node 0 Processor 5 APIC 0x3
      [   48.472193] smpboot: CPU 6 is now offline
      [   48.574181] smpboot: Booting Node 0 Processor 6 APIC 0x5
      [   48.743375] smpboot: CPU 7 is now offline
      [   48.838047] smpboot: Booting Node 0 Processor 7 APIC 0x7
      [   48.965447] __common_interrupt: 1.35 No irq handler for vector
      [   51.174065] mmc0: error -110 doing runtime resume
      [   54.978088] I/O error, dev mmcblk0, sector 21479 op 0x1:(WRITE) flags 0x0 phys_seg 11 prio class 0
      [   54.978108] Buffer I/O error on dev mmcblk0p1, logical block 19431, lost async page write
      [   54.978129] Buffer I/O error on dev mmcblk0p1, logical block 19432, lost async page write
      [   54.978134] Buffer I/O error on dev mmcblk0p1, logical block 19433, lost async page write
      [   54.978137] Buffer I/O error on dev mmcblk0p1, logical block 19434, lost async page write
      [   54.978141] Buffer I/O error on dev mmcblk0p1, logical block 19435, lost async page write
      [   54.978145] Buffer I/O error on dev mmcblk0p1, logical block 19436, lost async page write
      [   54.978148] Buffer I/O error on dev mmcblk0p1, logical block 19437, lost async page write
      [   54.978152] Buffer I/O error on dev mmcblk0p1, logical block 19438, lost async page write
      [   54.978155] Buffer I/O error on dev mmcblk0p1, logical block 19439, lost async page write
      [   54.978160] Buffer I/O error on dev mmcblk0p1, logical block 19440, lost async page write
      [   54.978244] mmc0: card aaaa removed
      [   54.978452] FAT-fs (mmcblk0p1): FAT read failed (blocknr 4257)
      
      An interrupt is raised immediately by rtsx_pci_write_register() in the
      runtime resume routine, but the IRQ handler has not been registered yet.
      
      So we can either move rtsx_pci_write_register() after
      rtsx_pci_acquire_irq(), or simply stop mangling the IRQ on runtime PM.
      Choose the latter to save some CPU cycles.
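
      A minimal sketch of the chosen approach, modeled on the mainline
      rtsx_pcr.c runtime PM callbacks (the exact surrounding code is an
      assumption, not the literal diff): the IRQ handler registered at probe
      time is simply left in place, so the interrupt raised by the
      HOST_SLEEP_STATE write is serviced normally.

          static int rtsx_pci_runtime_suspend(struct device *device)
          {
                  struct pcr_handle *handle = pci_get_drvdata(to_pci_dev(device));
                  struct rtsx_pcr *pcr = handle->pcr;

                  mutex_lock(&pcr->pcr_mutex);
                  rtsx_pci_power_off(pcr, HOST_ENTER_S3);
                  /* No free_irq() here: the handler stays registered. */
                  mutex_unlock(&pcr->pcr_mutex);

                  return 0;
          }

          static int rtsx_pci_runtime_resume(struct device *device)
          {
                  struct pcr_handle *handle = pci_get_drvdata(to_pci_dev(device));
                  struct rtsx_pcr *pcr = handle->pcr;

                  mutex_lock(&pcr->pcr_mutex);

                  /*
                   * This write raises an interrupt right away; because the
                   * IRQ was never freed on suspend, the registered handler
                   * services it instead of hitting "No irq handler for
                   * vector".
                   */
                  rtsx_pci_write_register(pcr, HOST_SLEEP_STATE, 0x03, 0x00);

                  /* No rtsx_pci_acquire_irq() here either. */
                  rtsx_pci_init_hw(pcr);

                  mutex_unlock(&pcr->pcr_mutex);
                  return 0;
          }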
      
      Fixes: 5b4258f6 ("misc: rtsx: rts5249 support runtime PM")
      Cc: stable <stable@vger.kernel.org>
      Signed-off-by: Kai-Heng Feng <kai.heng.feng@canonical.com>
      BugLink: https://bugs.launchpad.net/bugs/1951784
      Link: https://lore.kernel.org/r/20211126003246.1068770-1-kai.heng.feng@canonical.com
      
      
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • nvmem: eeprom: at25: fix FRAM byte_len · 9a626577
      Ralph Siemsen authored
      
      Commit fd307a4a ("nvmem: prepare basics for FRAM support") added
      support for FRAM devices such as the Cypress FM25V. During testing, it
      was found that the FRAM is detected properly, but reads and writes fail.
      Upon further investigation, two problems were found in the at25_probe()
      routine.
      
      1) In the case of an FRAM device without platform data, e.g.
             fram == true && spi->dev.platform_data == NULL
      the stack-local variable "struct spi_eeprom chip" is not fully
      initialized before being copied into at25->chip. The chip.flags field
      in particular can cause problems.
      
      2) The byte_len of FRAM is computed from its ID register, and is stored
      into the stack local "struct spi_eeprom chip" structure. This happens
      after the same structure has been copied into at25->chip. As a result,
      at25->chip.byte_len does not contain the correct length of the device.
      In turn this can cause the checks at the beginning of at25_ee_read() to
      fail (or equally, it could allow reads beyond the end of the device).
      
      Fix both of these issues by eliminating the on-stack struct spi_eeprom.
      Instead use the one inside the at25_data structure, which starts off
      zeroed.
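
      A sketch of the shape of the fix (the real at25_probe() differs in
      detail; fm25_fill_chip_from_id() and the FRAM detection below are
      hypothetical stand-ins): nothing is staged on the stack, so the FRAM
      length lands directly in at25->chip.byte_len.

          static int at25_probe(struct spi_device *spi)
          {
                  struct at25_data *at25;
                  bool is_fram;
                  int err;

                  /* Hypothetical detection; the real driver matches on the
                   * compatible string / device ID table. */
                  is_fram = device_property_match_string(&spi->dev, "compatible",
                                                         "cypress,fm25") >= 0;

                  at25 = devm_kzalloc(&spi->dev, sizeof(*at25), GFP_KERNEL);
                  if (!at25)
                          return -ENOMEM;

                  /* at25->chip starts off zeroed via devm_kzalloc(), so no
                   * flags are left uninitialized. */
                  if (!is_fram && spi->dev.platform_data)
                          memcpy(&at25->chip, spi->dev.platform_data,
                                 sizeof(at25->chip));

                  if (is_fram) {
                          /* Fill byte_len straight into at25->chip; there is
                           * no stack copy left to lose it from. */
                          err = fm25_fill_chip_from_id(spi, &at25->chip);
                          if (err)
                                  return err;
                  }

                  /* ... remainder of probe (regmap/nvmem setup) unchanged ... */
                  return 0;
          }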
      
      Fixes: fd307a4a ("nvmem: prepare basics for FRAM support")
      Cc: stable <stable@vger.kernel.org>
      Reviewed-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Ralph Siemsen <ralph.siemsen@linaro.org>
      Link: https://lore.kernel.org/r/20211108181627.645638-1-ralph.siemsen@linaro.org
      
      
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • misc: fastrpc: fix improper packet size calculation · 3a1bf591
      Jeya R authored
      
      The buffer list is sorted, but this is not taken into account when
      calculating the packet size. This leads to an incorrect copy-length
      calculation for non-dmaheap buffers, which eventually causes improper
      buffers to be sent to the DSP.
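
      A sketch of the corrected helper, following the shape of the mainline
      fastrpc.c code (the exact context is an assumption): the copy length is
      taken from the sorted/de-overlapped ctx->olaps[] entries rather than
      from the caller-order ctx->args[] lengths.

          static size_t fastrpc_get_payload_size(struct fastrpc_invoke_ctx *ctx,
                                                 int metalen)
          {
                  size_t size = ALIGN(metalen, FASTRPC_ALIGN);
                  int i;

                  for (i = 0; i < ctx->nscalars; i++) {
                          /* Only copied (non-dmaheap) buffers contribute. */
                          if (ctx->args[i].fd == 0 || ctx->args[i].fd == -1) {
                                  if (ctx->olaps[i].offset == 0)
                                          size = ALIGN(size, FASTRPC_ALIGN);
                                  size += ctx->olaps[i].mend - ctx->olaps[i].mstart;
                          }
                  }

                  return size;
          }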
      
      Fixes: c68cfb71 ("misc: fastrpc: Add support for context Invoke method")
      Reviewed-by: Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
      Signed-off-by: Jeya R <jeyr@codeaurora.org>
      Link: https://lore.kernel.org/r/1637771481-4299-1-git-send-email-jeyr@codeaurora.org
      
      
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>