Compare revisions

Changes are shown as if the source revision was being merged into the target revision.

Source

Select target project
No results found

Target

Select target project
  • drm/nouveau
  • karolherbst/nouveau
  • pmoreau/nouveau
  • tessiew490/nouveau
  • mohamexiety/nouveau
  • antoniospg100/nouveau
  • dufresnep/nouveau
  • mbrost/nouveau-drm-scheduler
  • jfouquart/nouveau
  • git-bruh/nouveau
  • bskeggs/nouveau
  • AFilius/nouveau-private-bugfix
  • mhenning/linux
13 results
Show changes
Commits on Source (36)
Showing changes with 443 additions and 251 deletions
@@ -27,6 +27,7 @@ properties:
           - mediatek,mt8188-dp-intf
           - mediatek,mt8192-dpi
           - mediatek,mt8195-dp-intf
+          - mediatek,mt8195-dpi
       - items:
           - enum:
               - mediatek,mt6795-dpi
@@ -35,6 +36,10 @@ properties:
           - enum:
               - mediatek,mt8365-dpi
           - const: mediatek,mt8192-dpi
+      - items:
+          - enum:
+              - mediatek,mt8188-dpi
+          - const: mediatek,mt8195-dpi
 
   reg:
     maxItems: 1
@@ -116,11 +121,13 @@ examples:
   - |
     #include <dt-bindings/interrupt-controller/arm-gic.h>
     #include <dt-bindings/clock/mt8173-clk.h>
+    #include <dt-bindings/power/mt8173-power.h>
 
     dpi: dpi@1401d000 {
         compatible = "mediatek,mt8173-dpi";
         reg = <0x1401d000 0x1000>;
         interrupts = <GIC_SPI 194 IRQ_TYPE_LEVEL_LOW>;
+        power-domains = <&spm MT8173_POWER_DOMAIN_MM>;
        clocks = <&mmsys CLK_MM_DPI_PIXEL>,
                 <&mmsys CLK_MM_DPI_ENGINE>,
                 <&apmixedsys CLK_APMIXED_TVDPLL>;
......
@@ -22,6 +22,9 @@ properties:
     oneOf:
       - enum:
           - mediatek,mt8195-disp-dsc
+      - items:
+          - const: mediatek,mt8188-disp-dsc
+          - const: mediatek,mt8195-disp-dsc
 
   reg:
     maxItems: 1
......
@@ -67,14 +67,19 @@ Agreed upon design principles
 Overview of baseline design
 ===========================
 
-Baseline design is simple as possible to get a working basline in which can be
-built upon.
-
-.. kernel-doc:: drivers/gpu/drm/xe/drm_gpusvm.c
+.. kernel-doc:: drivers/gpu/drm/drm_gpusvm.c
    :doc: Overview
+
+.. kernel-doc:: drivers/gpu/drm/drm_gpusvm.c
    :doc: Locking
-   :doc: Migrataion
+
+.. kernel-doc:: drivers/gpu/drm/drm_gpusvm.c
+   :doc: Migration
+
+.. kernel-doc:: drivers/gpu/drm/drm_gpusvm.c
    :doc: Partial Unmapping of Ranges
+
+.. kernel-doc:: drivers/gpu/drm/drm_gpusvm.c
    :doc: Examples
 
 Possible future design features
......
@@ -23,37 +23,42 @@
  * DOC: Overview
  *
  * GPU Shared Virtual Memory (GPU SVM) layer for the Direct Rendering Manager (DRM)
- *
- * The GPU SVM layer is a component of the DRM framework designed to manage shared
- * virtual memory between the CPU and GPU. It enables efficient data exchange and
- * processing for GPU-accelerated applications by allowing memory sharing and
+ * is a component of the DRM framework designed to manage shared virtual memory
+ * between the CPU and GPU. It enables efficient data exchange and processing
+ * for GPU-accelerated applications by allowing memory sharing and
  * synchronization between the CPU's and GPU's virtual address spaces.
  *
  * Key GPU SVM Components:
- * - Notifiers: Notifiers: Used for tracking memory intervals and notifying the
- *              GPU of changes, notifiers are sized based on a GPU SVM
- *              initialization parameter, with a recommendation of 512M or
- *              larger. They maintain a Red-BlacK tree and a list of ranges that
- *              fall within the notifier interval. Notifiers are tracked within
- *              a GPU SVM Red-BlacK tree and list and are dynamically inserted
- *              or removed as ranges within the interval are created or
- *              destroyed.
- * - Ranges: Represent memory ranges mapped in a DRM device and managed
- *           by GPU SVM. They are sized based on an array of chunk sizes, which
- *           is a GPU SVM initialization parameter, and the CPU address space.
- *           Upon GPU fault, the largest aligned chunk that fits within the
- *           faulting CPU address space is chosen for the range size. Ranges are
- *           expected to be dynamically allocated on GPU fault and removed on an
- *           MMU notifier UNMAP event. As mentioned above, ranges are tracked in
- *           a notifier's Red-Black tree.
- * - Operations: Define the interface for driver-specific GPU SVM operations
- *               such as range allocation, notifier allocation, and
- *               invalidations.
- * - Device Memory Allocations: Embedded structure containing enough information
- *                              for GPU SVM to migrate to / from device memory.
- * - Device Memory Operations: Define the interface for driver-specific device
- *                             memory operations release memory, populate pfns,
- *                             and copy to / from device memory.
+ *
+ * - Notifiers:
+ *	Used for tracking memory intervals and notifying the GPU of changes,
+ *	notifiers are sized based on a GPU SVM initialization parameter, with a
+ *	recommendation of 512M or larger. They maintain a Red-BlacK tree and a
+ *	list of ranges that fall within the notifier interval. Notifiers are
+ *	tracked within a GPU SVM Red-BlacK tree and list and are dynamically
+ *	inserted or removed as ranges within the interval are created or
+ *	destroyed.
+ * - Ranges:
+ *	Represent memory ranges mapped in a DRM device and managed by GPU SVM.
+ *	They are sized based on an array of chunk sizes, which is a GPU SVM
+ *	initialization parameter, and the CPU address space. Upon GPU fault,
+ *	the largest aligned chunk that fits within the faulting CPU address
+ *	space is chosen for the range size. Ranges are expected to be
+ *	dynamically allocated on GPU fault and removed on an MMU notifier UNMAP
+ *	event. As mentioned above, ranges are tracked in a notifier's Red-Black
+ *	tree.
+ *
+ * - Operations:
+ *	Define the interface for driver-specific GPU SVM operations such as
+ *	range allocation, notifier allocation, and invalidations.
+ *
+ * - Device Memory Allocations:
+ *	Embedded structure containing enough information for GPU SVM to migrate
+ *	to / from device memory.
+ *
+ * - Device Memory Operations:
+ *	Define the interface for driver-specific device memory operations
+ *	release memory, populate pfns, and copy to / from device memory.
 *
 * This layer provides interfaces for allocating, mapping, migrating, and
 * releasing memory ranges between the CPU and GPU. It handles all core memory
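The one concrete algorithm in this overview is the range-sizing rule: on a GPU fault, the largest aligned chunk that still fits inside the faulting CPU address space is chosen. A standalone sketch of that rule (plain userspace C, not DRM code; the chunk sizes and addresses are made-up example values):

#include <stdint.h>
#include <stdio.h>

/*
 * Pick the largest chunk that is naturally aligned at the fault address and
 * still fits inside the surrounding CPU mapping. chunks[] is assumed sorted
 * from largest to smallest, all powers of two.
 */
static uint64_t pick_chunk(uint64_t fault, uint64_t vma_start, uint64_t vma_end,
                           const uint64_t *chunks, int nr_chunks)
{
        int i;

        for (i = 0; i < nr_chunks; i++) {
                uint64_t start = fault & ~(chunks[i] - 1);
                uint64_t end = start + chunks[i];

                if (start >= vma_start && end <= vma_end)
                        return chunks[i];
        }
        return 0;
}

int main(void)
{
        const uint64_t chunks[] = { 0x200000, 0x10000, 0x1000 }; /* 2M, 64K, 4K */
        uint64_t vma_start = 0x7f0000400000ull;

        /* A fault 12 KiB into a 128 KiB mapping: 2M overruns the VMA, 64K fits. */
        printf("chosen chunk: 0x%llx\n",
               (unsigned long long)pick_chunk(vma_start + 0x3000, vma_start,
                                              vma_start + 0x20000, chunks, 3));
        return 0;
}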
@@ -63,14 +68,18 @@
  * below.
  *
  * Expected Driver Components:
- * - GPU page fault handler: Used to create ranges and notifiers based on the
- *                           fault address, optionally migrate the range to
- *                           device memory, and create GPU bindings.
- * - Garbage collector: Used to unmap and destroy GPU bindings for ranges.
- *                      Ranges are expected to be added to the garbage collector
- *                      upon a MMU_NOTIFY_UNMAP event in notifier callback.
- * - Notifier callback: Used to invalidate and DMA unmap GPU bindings for
- *                      ranges.
+ *
+ * - GPU page fault handler:
+ *	Used to create ranges and notifiers based on the fault address,
+ *	optionally migrate the range to device memory, and create GPU bindings.
+ *
+ * - Garbage collector:
+ *	Used to unmap and destroy GPU bindings for ranges. Ranges are expected
+ *	to be added to the garbage collector upon a MMU_NOTIFY_UNMAP event in
+ *	notifier callback.
+ *
+ * - Notifier callback:
+ *	Used to invalidate and DMA unmap GPU bindings for ranges.
 */
 
 /**
@@ -83,9 +92,9 @@
  * range RB tree and list, as well as the range's DMA mappings and sequence
  * number. GPU SVM manages all necessary locking and unlocking operations,
  * except for the recheck range's pages being valid
- * (drm_gpusvm_range_pages_valid) when the driver is committing GPU bindings. This
- * lock corresponds to the 'driver->update' lock mentioned in the HMM
- * documentation (TODO: Link). Future revisions may transition from a GPU SVM
+ * (drm_gpusvm_range_pages_valid) when the driver is committing GPU bindings.
+ * This lock corresponds to the ``driver->update`` lock mentioned in
+ * Documentation/mm/hmm.rst. Future revisions may transition from a GPU SVM
 * global lock to a per-notifier lock if finer-grained locking is deemed
 * necessary.
 *
@@ -102,11 +111,11 @@
  * DOC: Migration
  *
  * The migration support is quite simple, allowing migration between RAM and
- * device memory at the range granularity. For example, GPU SVM currently does not
- * support mixing RAM and device memory pages within a range. This means that upon GPU
- * fault, the entire range can be migrated to device memory, and upon CPU fault, the
- * entire range is migrated to RAM. Mixed RAM and device memory storage within a range
- * could be added in the future if required.
+ * device memory at the range granularity. For example, GPU SVM currently does
+ * not support mixing RAM and device memory pages within a range. This means
+ * that upon GPU fault, the entire range can be migrated to device memory, and
+ * upon CPU fault, the entire range is migrated to RAM. Mixed RAM and device
+ * memory storage within a range could be added in the future if required.
 *
 * The reasoning for only supporting range granularity is as follows: it
 * simplifies the implementation, and range sizes are driver-defined and should
@@ -119,11 +128,11 @@
  * Partial unmapping of ranges (e.g., 1M out of 2M is unmapped by CPU resulting
  * in MMU_NOTIFY_UNMAP event) presents several challenges, with the main one
  * being that a subset of the range still has CPU and GPU mappings. If the
- * backing store for the range is in device memory, a subset of the backing store has
- * references. One option would be to split the range and device memory backing store,
- * but the implementation for this would be quite complicated. Given that
- * partial unmappings are rare and driver-defined range sizes are relatively
- * small, GPU SVM does not support splitting of ranges.
+ * backing store for the range is in device memory, a subset of the backing
+ * store has references. One option would be to split the range and device
+ * memory backing store, but the implementation for this would be quite
+ * complicated. Given that partial unmappings are rare and driver-defined range
+ * sizes are relatively small, GPU SVM does not support splitting of ranges.
 *
 * With no support for range splitting, upon partial unmapping of a range, the
 * driver is expected to invalidate and destroy the entire range. If the range
@@ -144,6 +153,8 @@
  *
  * 1) GPU page fault handler
  *
+ * .. code-block:: c
+ *
  *	int driver_bind_range(struct drm_gpusvm *gpusvm, struct drm_gpusvm_range *range)
  *	{
  *		int err = 0;
@@ -208,7 +219,9 @@
  *		return err;
  *	}
  *
- * 2) Garbage Collector.
+ * 2) Garbage Collector
+ *
+ * .. code-block:: c
  *
  *	void __driver_garbage_collector(struct drm_gpusvm *gpusvm,
  *					struct drm_gpusvm_range *range)
@@ -231,7 +244,9 @@
  *		__driver_garbage_collector(gpusvm, range);
  *	}
  *
- * 3) Notifier callback.
+ * 3) Notifier callback
+ *
+ * .. code-block:: c
 *
 *	void driver_invalidation(struct drm_gpusvm *gpusvm,
 *				 struct drm_gpusvm_notifier *notifier,
@@ -499,7 +514,7 @@ drm_gpusvm_notifier_invalidate(struct mmu_interval_notifier *mni,
 	return true;
 }
 
-/**
+/*
  * drm_gpusvm_notifier_ops - MMU interval notifier operations for GPU SVM
  */
 static const struct mmu_interval_notifier_ops drm_gpusvm_notifier_ops = {
@@ -2055,7 +2070,6 @@ static int __drm_gpusvm_migrate_to_ram(struct vm_area_struct *vas,
 
 /**
  * drm_gpusvm_range_evict - Evict GPU SVM range
- * @pagemap: Pointer to the GPU SVM structure
  * @range: Pointer to the GPU SVM range to be removed
  *
  * This function evicts the specified GPU SVM range. This function will not
@@ -2146,8 +2160,8 @@ static vm_fault_t drm_gpusvm_migrate_to_ram(struct vm_fault *vmf)
 	return err ? VM_FAULT_SIGBUS : 0;
 }
 
-/**
- * drm_gpusvm_pagemap_ops() - Device page map operations for GPU SVM
+/*
+ * drm_gpusvm_pagemap_ops - Device page map operations for GPU SVM
 */
 static const struct dev_pagemap_ops drm_gpusvm_pagemap_ops = {
 	.page_free = drm_gpusvm_page_free,
......
@@ -620,13 +620,16 @@ static void mtk_crtc_update_config(struct mtk_crtc *mtk_crtc, bool needs_vblank)
 		mbox_send_message(mtk_crtc->cmdq_client.chan, cmdq_handle);
 		mbox_client_txdone(mtk_crtc->cmdq_client.chan, 0);
+		goto update_config_out;
 	}
-#else
+#endif
 	spin_lock_irqsave(&mtk_crtc->config_lock, flags);
 	mtk_crtc->config_updating = false;
 	spin_unlock_irqrestore(&mtk_crtc->config_lock, flags);
-#endif
 
+#if IS_REACHABLE(CONFIG_MTK_CMDQ)
+update_config_out:
+#endif
 	mutex_unlock(&mtk_crtc->hw_lock);
 }
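A minimal standalone sketch of the control-flow shape this hunk adopts (illustrative only, not driver code): when the hardware-offload path is compiled in and taken, it jumps past the inline cleanup that only the fallback path needs, and the label is guarded by the same preprocessor condition so builds without that path never see an unused label.

#include <stdio.h>

#define HAVE_OFFLOAD_PATH 1

static void update_config(int use_offload)
{
#if HAVE_OFFLOAD_PATH
        if (use_offload) {
                printf("queued to hardware; completion flag cleared in its callback\n");
                goto update_config_out;
        }
#endif
        printf("cleared completion flag inline\n");

#if HAVE_OFFLOAD_PATH
update_config_out:
#endif
        printf("unlock\n");
}

int main(void)
{
        update_config(1);
        update_config(0);
        return 0;
}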
......
@@ -1766,7 +1766,7 @@ static int mtk_dp_parse_capabilities(struct mtk_dp *mtk_dp)
 
 	ret = drm_dp_dpcd_readb(&mtk_dp->aux, DP_MSTM_CAP, &val);
 	if (ret < 1) {
-		drm_err(mtk_dp->drm_dev, "Read mstm cap failed\n");
+		dev_err(mtk_dp->dev, "Read mstm cap failed: %zd\n", ret);
 		return ret == 0 ? -EIO : ret;
 	}
 
@@ -1776,7 +1776,7 @@ static int mtk_dp_parse_capabilities(struct mtk_dp *mtk_dp)
 				   DP_DEVICE_SERVICE_IRQ_VECTOR_ESI0,
 				   &val);
 	if (ret < 1) {
-		drm_err(mtk_dp->drm_dev, "Read irq vector failed\n");
+		dev_err(mtk_dp->dev, "Read irq vector failed: %zd\n", ret);
 		return ret == 0 ? -EIO : ret;
 	}
 
@@ -2059,7 +2059,7 @@ static int mtk_dp_wait_hpd_asserted(struct drm_dp_aux *mtk_aux, unsigned long wa
 
 	ret = mtk_dp_parse_capabilities(mtk_dp);
 	if (ret) {
-		drm_err(mtk_dp->drm_dev, "Can't parse capabilities\n");
+		dev_err(mtk_dp->dev, "Can't parse capabilities: %d\n", ret);
 		return ret;
 	}
......
@@ -4,8 +4,10 @@
  * Author: Jie Qiu <jie.qiu@mediatek.com>
  */
 
+#include <linux/bitfield.h>
 #include <linux/clk.h>
 #include <linux/component.h>
+#include <linux/debugfs.h>
 #include <linux/interrupt.h>
 #include <linux/kernel.h>
 #include <linux/media-bus-format.h>
@@ -116,9 +118,15 @@ struct mtk_dpi_yc_limit {
 	u16 c_bottom;
 };
 
+struct mtk_dpi_factor {
+	u32 clock;
+	u8 factor;
+};
+
 /**
  * struct mtk_dpi_conf - Configuration of mediatek dpi.
- * @cal_factor: Callback function to calculate factor value.
+ * @dpi_factor: SoC-specific pixel clock PLL factor values.
+ * @num_dpi_factor: Number of pixel clock PLL factor values.
  * @reg_h_fre_con: Register address of frequency control.
 * @max_clock_khz: Max clock frequency supported for this SoCs in khz units.
 * @edge_sel_en: Enable of edge selection.
@@ -127,19 +135,24 @@ struct mtk_dpi_yc_limit {
  * @is_ck_de_pol: Support CK/DE polarity.
  * @swap_input_support: Support input swap function.
  * @support_direct_pin: IP supports direct connection to dpi panels.
- * @input_2pixel: Input pixel of dp_intf is 2 pixel per round, so enable this
- *                config to enable this feature.
  * @dimension_mask: Mask used for HWIDTH, HPORCH, VSYNC_WIDTH and VSYNC_PORCH
  *                  (no shift).
  * @hvsize_mask: Mask of HSIZE and VSIZE mask (no shift).
  * @channel_swap_shift: Shift value of channel swap.
  * @yuv422_en_bit: Enable bit of yuv422.
  * @csc_enable_bit: Enable bit of CSC.
+ * @input_2p_en_bit: Enable bit for input two pixel per round feature.
+ *                   If present, implies that the feature must be enabled.
  * @pixels_per_iter: Quantity of transferred pixels per iteration.
  * @edge_cfg_in_mmsys: If the edge configuration for DPI's output needs to be set in MMSYS.
+ * @clocked_by_hdmi: HDMI IP outputs clock to dpi_pixel_clk input clock, needed
+ *                   for DPI registers access.
+ * @output_1pixel: Enable outputting one pixel per round; if the input is two pixel per
+ *                 round, the DPI hardware will internally transform it to 1T1P.
  */
 struct mtk_dpi_conf {
-	unsigned int (*cal_factor)(int clock);
+	const struct mtk_dpi_factor *dpi_factor;
+	const u8 num_dpi_factor;
 	u32 reg_h_fre_con;
 	u32 max_clock_khz;
 	bool edge_sel_en;
@@ -148,14 +161,16 @@ struct mtk_dpi_conf {
 	bool is_ck_de_pol;
 	bool swap_input_support;
 	bool support_direct_pin;
-	bool input_2pixel;
 	u32 dimension_mask;
 	u32 hvsize_mask;
 	u32 channel_swap_shift;
 	u32 yuv422_en_bit;
 	u32 csc_enable_bit;
+	u32 input_2p_en_bit;
 	u32 pixels_per_iter;
 	bool edge_cfg_in_mmsys;
+	bool clocked_by_hdmi;
+	bool output_1pixel;
 };
 
 static void mtk_dpi_mask(struct mtk_dpi *dpi, u32 offset, u32 val, u32 mask)
@@ -166,6 +181,18 @@ static void mtk_dpi_mask(struct mtk_dpi *dpi, u32 offset, u32 val, u32 mask)
 	writel(tmp, dpi->regs + offset);
 }
 
+static void mtk_dpi_test_pattern_en(struct mtk_dpi *dpi, u8 type, bool enable)
+{
+	u32 val;
+
+	if (enable)
+		val = FIELD_PREP(DPI_PAT_SEL, type) | DPI_PAT_EN;
+	else
+		val = 0;
+
+	mtk_dpi_mask(dpi, DPI_PATTERN0, val, DPI_PAT_SEL | DPI_PAT_EN);
+}
+
 static void mtk_dpi_sw_reset(struct mtk_dpi *dpi, bool reset)
 {
 	mtk_dpi_mask(dpi, DPI_RET, reset ? RST : 0, RST);
@@ -410,12 +437,13 @@ static void mtk_dpi_config_swap_input(struct mtk_dpi *dpi, bool enable)
 
 static void mtk_dpi_config_2n_h_fre(struct mtk_dpi *dpi)
 {
-	mtk_dpi_mask(dpi, dpi->conf->reg_h_fre_con, H_FRE_2N, H_FRE_2N);
+	if (dpi->conf->reg_h_fre_con)
+		mtk_dpi_mask(dpi, dpi->conf->reg_h_fre_con, H_FRE_2N, H_FRE_2N);
 }
 
 static void mtk_dpi_config_disable_edge(struct mtk_dpi *dpi)
 {
-	if (dpi->conf->edge_sel_en)
+	if (dpi->conf->edge_sel_en && dpi->conf->reg_h_fre_con)
 		mtk_dpi_mask(dpi, dpi->conf->reg_h_fre_con, 0, EDGE_SEL_EN);
 }
@@ -471,6 +499,7 @@ static void mtk_dpi_power_off(struct mtk_dpi *dpi)
 
 	mtk_dpi_disable(dpi);
 	clk_disable_unprepare(dpi->pixel_clk);
+	clk_disable_unprepare(dpi->tvd_clk);
 	clk_disable_unprepare(dpi->engine_clk);
 }
 
@@ -487,6 +516,12 @@ static int mtk_dpi_power_on(struct mtk_dpi *dpi)
 		goto err_refcount;
 	}
 
+	ret = clk_prepare_enable(dpi->tvd_clk);
+	if (ret) {
+		dev_err(dpi->dev, "Failed to enable tvd pll: %d\n", ret);
+		goto err_engine;
+	}
+
 	ret = clk_prepare_enable(dpi->pixel_clk);
 	if (ret) {
 		dev_err(dpi->dev, "Failed to enable pixel clock: %d\n", ret);
@@ -496,32 +531,39 @@ static int mtk_dpi_power_on(struct mtk_dpi *dpi)
 	return 0;
 
 err_pixel:
+	clk_disable_unprepare(dpi->tvd_clk);
+err_engine:
 	clk_disable_unprepare(dpi->engine_clk);
 err_refcount:
 	dpi->refcount--;
 	return ret;
 }
 
-static int mtk_dpi_set_display_mode(struct mtk_dpi *dpi,
-				    struct drm_display_mode *mode)
+static unsigned int mtk_dpi_calculate_factor(struct mtk_dpi *dpi, int mode_clk)
+{
+	const struct mtk_dpi_factor *dpi_factor = dpi->conf->dpi_factor;
+	int i;
+
+	for (i = 0; i < dpi->conf->num_dpi_factor; i++) {
+		if (mode_clk <= dpi_factor[i].clock)
+			return dpi_factor[i].factor;
+	}
+
+	/* If no match try the lowest possible factor */
+	return dpi_factor[dpi->conf->num_dpi_factor - 1].factor;
+}
+
+static void mtk_dpi_set_pixel_clk(struct mtk_dpi *dpi, struct videomode *vm, int mode_clk)
 {
-	struct mtk_dpi_polarities dpi_pol;
-	struct mtk_dpi_sync_param hsync;
-	struct mtk_dpi_sync_param vsync_lodd = { 0 };
-	struct mtk_dpi_sync_param vsync_leven = { 0 };
-	struct mtk_dpi_sync_param vsync_rodd = { 0 };
-	struct mtk_dpi_sync_param vsync_reven = { 0 };
-	struct videomode vm = { 0 };
 	unsigned long pll_rate;
 	unsigned int factor;
 
 	/* let pll_rate can fix the valid range of tvdpll (1G~2GHz) */
-	factor = dpi->conf->cal_factor(mode->clock);
-	drm_display_mode_to_videomode(mode, &vm);
-	pll_rate = vm.pixelclock * factor;
+	factor = mtk_dpi_calculate_factor(dpi, mode_clk);
+	pll_rate = vm->pixelclock * factor;
 
 	dev_dbg(dpi->dev, "Want PLL %lu Hz, pixel clock %lu Hz\n",
-		pll_rate, vm.pixelclock);
+		pll_rate, vm->pixelclock);
 
 	clk_set_rate(dpi->tvd_clk, pll_rate);
 	pll_rate = clk_get_rate(dpi->tvd_clk);
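A standalone illustration of how the new dpi_factor tables replace the per-SoC cal_factor() callbacks (plain userspace C; the table values below are the mt8173 ones from this series, and 148500 kHz is just an example mode clock). The first entry whose clock bound covers the mode clock wins.

#include <stdint.h>
#include <stdio.h>

struct mtk_dpi_factor {
        uint32_t clock;  /* upper bound of the mode clock, in kHz */
        uint8_t factor;  /* TVDPLL multiplier for that bucket */
};

static const struct mtk_dpi_factor dpi_factor_mt8173[] = {
        { 27000, 48 }, { 84000, 24 }, { 167000, 12 }, { UINT32_MAX, 6 }
};

static unsigned int calc_factor(const struct mtk_dpi_factor *t, int n,
                                unsigned int mode_clk)
{
        int i;

        for (i = 0; i < n; i++)
                if (mode_clk <= t[i].clock)
                        return t[i].factor;
        return t[n - 1].factor; /* no match: fall back to the lowest factor */
}

int main(void)
{
        unsigned int mode_clk = 148500; /* 1080p60 pixel clock, kHz */
        unsigned int factor = calc_factor(dpi_factor_mt8173, 4, mode_clk);

        /* 148500 kHz * 12 = 1782000 kHz, inside the 1-2 GHz TVDPLL range */
        printf("factor=%u, pll=%u kHz\n", factor, mode_clk * factor);
        return 0;
}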
@@ -531,20 +573,36 @@ static int mtk_dpi_set_display_mode(struct mtk_dpi *dpi,
 	 * pixels for each iteration: divide the clock by this number and
 	 * adjust the display porches accordingly.
 	 */
-	vm.pixelclock = pll_rate / factor;
-	vm.pixelclock /= dpi->conf->pixels_per_iter;
+	vm->pixelclock = pll_rate / factor;
+	vm->pixelclock /= dpi->conf->pixels_per_iter;
 
 	if ((dpi->output_fmt == MEDIA_BUS_FMT_RGB888_2X12_LE) ||
 	    (dpi->output_fmt == MEDIA_BUS_FMT_RGB888_2X12_BE))
-		clk_set_rate(dpi->pixel_clk, vm.pixelclock * 2);
+		clk_set_rate(dpi->pixel_clk, vm->pixelclock * 2);
 	else
-		clk_set_rate(dpi->pixel_clk, vm.pixelclock);
+		clk_set_rate(dpi->pixel_clk, vm->pixelclock);
 
-	vm.pixelclock = clk_get_rate(dpi->pixel_clk);
+	vm->pixelclock = clk_get_rate(dpi->pixel_clk);
 
 	dev_dbg(dpi->dev, "Got PLL %lu Hz, pixel clock %lu Hz\n",
-		pll_rate, vm.pixelclock);
+		pll_rate, vm->pixelclock);
+}
+
+static int mtk_dpi_set_display_mode(struct mtk_dpi *dpi,
+				    struct drm_display_mode *mode)
+{
+	struct mtk_dpi_polarities dpi_pol;
+	struct mtk_dpi_sync_param hsync;
+	struct mtk_dpi_sync_param vsync_lodd = { 0 };
+	struct mtk_dpi_sync_param vsync_leven = { 0 };
+	struct mtk_dpi_sync_param vsync_rodd = { 0 };
+	struct mtk_dpi_sync_param vsync_reven = { 0 };
+	struct videomode vm = { 0 };
+
+	drm_display_mode_to_videomode(mode, &vm);
+
+	if (!dpi->conf->clocked_by_hdmi)
+		mtk_dpi_set_pixel_clk(dpi, &vm, mode->clock);
 
 	dpi_pol.ck_pol = MTK_DPI_POLARITY_FALLING;
 	dpi_pol.de_pol = MTK_DPI_POLARITY_RISING;
@@ -607,12 +665,18 @@ static int mtk_dpi_set_display_mode(struct mtk_dpi *dpi,
 	if (dpi->conf->support_direct_pin) {
 		mtk_dpi_config_yc_map(dpi, dpi->yc_map);
 		mtk_dpi_config_2n_h_fre(dpi);
-		mtk_dpi_dual_edge(dpi);
+
+		/* DPI can connect to either an external bridge or the internal HDMI encoder */
+		if (dpi->conf->output_1pixel)
+			mtk_dpi_mask(dpi, DPI_CON, DPI_OUTPUT_1T1P_EN, DPI_OUTPUT_1T1P_EN);
+		else
+			mtk_dpi_dual_edge(dpi);
+
 		mtk_dpi_config_disable_edge(dpi);
 	}
 
-	if (dpi->conf->input_2pixel) {
-		mtk_dpi_mask(dpi, DPI_CON, DPINTF_INPUT_2P_EN,
-			     DPINTF_INPUT_2P_EN);
+	if (dpi->conf->input_2p_en_bit) {
+		mtk_dpi_mask(dpi, DPI_CON, dpi->conf->input_2p_en_bit,
+			     dpi->conf->input_2p_en_bit);
 	}
 
 	mtk_dpi_sw_reset(dpi, false);
@@ -767,6 +831,99 @@ mtk_dpi_bridge_mode_valid(struct drm_bridge *bridge,
 	return MODE_OK;
 }
 
+static int mtk_dpi_debug_tp_show(struct seq_file *m, void *arg)
+{
+	struct mtk_dpi *dpi = m->private;
+	bool en;
+	u32 val;
+
+	if (!dpi)
+		return -EINVAL;
+
+	val = readl(dpi->regs + DPI_PATTERN0);
+	en = val & DPI_PAT_EN;
+	val = FIELD_GET(DPI_PAT_SEL, val);
+
+	seq_printf(m, "DPI Test Pattern: %s\n", en ? "Enabled" : "Disabled");
+
+	if (en) {
+		seq_printf(m, "Internal pattern %d: ", val);
+		switch (val) {
+		case 0:
+			seq_puts(m, "256 Vertical Gray\n");
+			break;
+		case 1:
+			seq_puts(m, "1024 Vertical Gray\n");
+			break;
+		case 2:
+			seq_puts(m, "256 Horizontal Gray\n");
+			break;
+		case 3:
+			seq_puts(m, "1024 Horizontal Gray\n");
+			break;
+		case 4:
+			seq_puts(m, "Vertical Color bars\n");
+			break;
+		case 6:
+			seq_puts(m, "Frame border\n");
+			break;
+		case 7:
+			seq_puts(m, "Dot moire\n");
+			break;
+		default:
+			seq_puts(m, "Invalid selection\n");
+			break;
+		}
+	}
+
+	return 0;
+}
+
+static ssize_t mtk_dpi_debug_tp_write(struct file *file, const char __user *ubuf,
+				      size_t len, loff_t *offp)
+{
+	struct seq_file *m = file->private_data;
+	u32 en, type;
+	char buf[6];
+
+	if (!m || !m->private || *offp || len > sizeof(buf) - 1)
+		return -EINVAL;
+
+	memset(buf, 0, sizeof(buf));
+
+	if (copy_from_user(buf, ubuf, len))
+		return -EFAULT;
+
+	if (sscanf(buf, "%u %u", &en, &type) != 2)
+		return -EINVAL;
+
+	if (en < 0 || en > 1 || type < 0 || type > 7)
+		return -EINVAL;
+
+	mtk_dpi_test_pattern_en((struct mtk_dpi *)m->private, type, en);
+	return len;
+}
+
+static int mtk_dpi_debug_tp_open(struct inode *inode, struct file *file)
+{
+	return single_open(file, mtk_dpi_debug_tp_show, inode->i_private);
+}
+
+static const struct file_operations mtk_dpi_debug_tp_fops = {
+	.owner = THIS_MODULE,
+	.open = mtk_dpi_debug_tp_open,
+	.read = seq_read,
+	.write = mtk_dpi_debug_tp_write,
+	.llseek = seq_lseek,
+	.release = single_release,
+};
+
+static void mtk_dpi_debugfs_init(struct drm_bridge *bridge, struct dentry *root)
+{
+	struct mtk_dpi *dpi = bridge_to_dpi(bridge);
+
+	debugfs_create_file("dpi_test_pattern", 0640, root, dpi, &mtk_dpi_debug_tp_fops);
+}
+
 static const struct drm_bridge_funcs mtk_dpi_bridge_funcs = {
 	.attach = mtk_dpi_bridge_attach,
 	.mode_set = mtk_dpi_bridge_mode_set,
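For reference, a standalone sketch of the DPI_PATTERN0 encoding that the new dpi_test_pattern debugfs file drives (plain userspace C; the bit positions match the DPI_PAT_EN and DPI_PAT_SEL definitions added to the register header further down, and the kernel code does the same packing with FIELD_PREP). Writing two integers such as "1 4" to the file enables internal pattern 4, the vertical color bars.

#include <stdint.h>
#include <stdio.h>

#define DPI_PAT_EN        (1u << 0)                     /* BIT(0) */
#define DPI_PAT_SEL_SHIFT 4
#define DPI_PAT_SEL_MASK  (0x7u << DPI_PAT_SEL_SHIFT)   /* GENMASK(6, 4) */

static uint32_t dpi_pattern0_value(unsigned int enable, unsigned int type)
{
        if (!enable)
                return 0;
        return ((type << DPI_PAT_SEL_SHIFT) & DPI_PAT_SEL_MASK) | DPI_PAT_EN;
}

int main(void)
{
        /* "echo 1 4 > dpi_test_pattern" ends up programming this value */
        printf("DPI_PATTERN0 = 0x%02x\n", dpi_pattern0_value(1, 4));
        return 0;
}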
@@ -779,20 +936,23 @@ static const struct drm_bridge_funcs mtk_dpi_bridge_funcs = {
 	.atomic_duplicate_state = drm_atomic_helper_bridge_duplicate_state,
 	.atomic_destroy_state = drm_atomic_helper_bridge_destroy_state,
 	.atomic_reset = drm_atomic_helper_bridge_reset,
+	.debugfs_init = mtk_dpi_debugfs_init,
 };
 
 void mtk_dpi_start(struct device *dev)
 {
 	struct mtk_dpi *dpi = dev_get_drvdata(dev);
 
-	mtk_dpi_power_on(dpi);
+	if (!dpi->conf->clocked_by_hdmi)
+		mtk_dpi_power_on(dpi);
 }
 
 void mtk_dpi_stop(struct device *dev)
 {
 	struct mtk_dpi *dpi = dev_get_drvdata(dev);
 
-	mtk_dpi_power_off(dpi);
+	if (!dpi->conf->clocked_by_hdmi)
+		mtk_dpi_power_off(dpi);
 }
 
 unsigned int mtk_dpi_encoder_index(struct device *dev)
@@ -857,48 +1017,6 @@ static const struct component_ops mtk_dpi_component_ops = {
 	.unbind = mtk_dpi_unbind,
 };
 
-static unsigned int mt8173_calculate_factor(int clock)
-{
-	if (clock <= 27000)
-		return 3 << 4;
-	else if (clock <= 84000)
-		return 3 << 3;
-	else if (clock <= 167000)
-		return 3 << 2;
-	else
-		return 3 << 1;
-}
-
-static unsigned int mt2701_calculate_factor(int clock)
-{
-	if (clock <= 64000)
-		return 4;
-	else if (clock <= 128000)
-		return 2;
-	else
-		return 1;
-}
-
-static unsigned int mt8183_calculate_factor(int clock)
-{
-	if (clock <= 27000)
-		return 8;
-	else if (clock <= 167000)
-		return 4;
-	else
-		return 2;
-}
-
-static unsigned int mt8195_dpintf_calculate_factor(int clock)
-{
-	if (clock < 70000)
-		return 4;
-	else if (clock < 200000)
-		return 2;
-	else
-		return 1;
-}
-
 static const u32 mt8173_output_fmts[] = {
 	MEDIA_BUS_FMT_RGB888_1X24,
 };
@@ -913,8 +1031,25 @@ static const u32 mt8195_output_fmts[] = {
 	MEDIA_BUS_FMT_YUYV8_1X16,
 };
 
+static const struct mtk_dpi_factor dpi_factor_mt2701[] = {
+	{ 64000, 4 }, { 128000, 2 }, { U32_MAX, 1 }
+};
+
+static const struct mtk_dpi_factor dpi_factor_mt8173[] = {
+	{ 27000, 48 }, { 84000, 24 }, { 167000, 12 }, { U32_MAX, 6 }
+};
+
+static const struct mtk_dpi_factor dpi_factor_mt8183[] = {
+	{ 27000, 8 }, { 167000, 4 }, { U32_MAX, 2 }
+};
+
+static const struct mtk_dpi_factor dpi_factor_mt8195_dp_intf[] = {
+	{ 70000 - 1, 4 }, { 200000 - 1, 2 }, { U32_MAX, 1 }
+};
+
 static const struct mtk_dpi_conf mt8173_conf = {
-	.cal_factor = mt8173_calculate_factor,
+	.dpi_factor = dpi_factor_mt8173,
+	.num_dpi_factor = ARRAY_SIZE(dpi_factor_mt8173),
 	.reg_h_fre_con = 0xe0,
 	.max_clock_khz = 300000,
 	.output_fmts = mt8173_output_fmts,
@@ -931,7 +1066,8 @@ static const struct mtk_dpi_conf mt8173_conf = {
 };
 
 static const struct mtk_dpi_conf mt2701_conf = {
-	.cal_factor = mt2701_calculate_factor,
+	.dpi_factor = dpi_factor_mt2701,
+	.num_dpi_factor = ARRAY_SIZE(dpi_factor_mt2701),
 	.reg_h_fre_con = 0xb0,
 	.edge_sel_en = true,
 	.max_clock_khz = 150000,
@@ -949,7 +1085,8 @@ static const struct mtk_dpi_conf mt2701_conf = {
 };
 
 static const struct mtk_dpi_conf mt8183_conf = {
-	.cal_factor = mt8183_calculate_factor,
+	.dpi_factor = dpi_factor_mt8183,
+	.num_dpi_factor = ARRAY_SIZE(dpi_factor_mt8183),
 	.reg_h_fre_con = 0xe0,
 	.max_clock_khz = 100000,
 	.output_fmts = mt8183_output_fmts,
@@ -966,7 +1103,8 @@ static const struct mtk_dpi_conf mt8183_conf = {
 };
 
 static const struct mtk_dpi_conf mt8186_conf = {
-	.cal_factor = mt8183_calculate_factor,
+	.dpi_factor = dpi_factor_mt8183,
+	.num_dpi_factor = ARRAY_SIZE(dpi_factor_mt8183),
 	.reg_h_fre_con = 0xe0,
 	.max_clock_khz = 150000,
 	.output_fmts = mt8183_output_fmts,
@@ -984,7 +1122,8 @@ static const struct mtk_dpi_conf mt8186_conf = {
 };
 
 static const struct mtk_dpi_conf mt8192_conf = {
-	.cal_factor = mt8183_calculate_factor,
+	.dpi_factor = dpi_factor_mt8183,
+	.num_dpi_factor = ARRAY_SIZE(dpi_factor_mt8183),
 	.reg_h_fre_con = 0xe0,
 	.max_clock_khz = 150000,
 	.output_fmts = mt8183_output_fmts,
@@ -1000,18 +1139,37 @@ static const struct mtk_dpi_conf mt8192_conf = {
 	.csc_enable_bit = CSC_ENABLE,
 };
 
+static const struct mtk_dpi_conf mt8195_conf = {
+	.max_clock_khz = 594000,
+	.output_fmts = mt8183_output_fmts,
+	.num_output_fmts = ARRAY_SIZE(mt8183_output_fmts),
+	.pixels_per_iter = 1,
+	.is_ck_de_pol = true,
+	.swap_input_support = true,
+	.support_direct_pin = true,
+	.dimension_mask = HPW_MASK,
+	.hvsize_mask = HSIZE_MASK,
+	.channel_swap_shift = CH_SWAP,
+	.yuv422_en_bit = YUV422_EN,
+	.csc_enable_bit = CSC_ENABLE,
+	.input_2p_en_bit = DPI_INPUT_2P_EN,
+	.clocked_by_hdmi = true,
+	.output_1pixel = true,
+};
+
 static const struct mtk_dpi_conf mt8195_dpintf_conf = {
-	.cal_factor = mt8195_dpintf_calculate_factor,
+	.dpi_factor = dpi_factor_mt8195_dp_intf,
+	.num_dpi_factor = ARRAY_SIZE(dpi_factor_mt8195_dp_intf),
 	.max_clock_khz = 600000,
 	.output_fmts = mt8195_output_fmts,
 	.num_output_fmts = ARRAY_SIZE(mt8195_output_fmts),
 	.pixels_per_iter = 4,
-	.input_2pixel = true,
 	.dimension_mask = DPINTF_HPW_MASK,
 	.hvsize_mask = DPINTF_HSIZE_MASK,
 	.channel_swap_shift = DPINTF_CH_SWAP,
 	.yuv422_en_bit = DPINTF_YUV422_EN,
 	.csc_enable_bit = DPINTF_CSC_ENABLE,
+	.input_2p_en_bit = DPINTF_INPUT_2P_EN,
 };
 
 static int mtk_dpi_probe(struct platform_device *pdev)
@@ -1102,6 +1260,7 @@ static const struct of_device_id mtk_dpi_of_ids[] = {
 	{ .compatible = "mediatek,mt8188-dp-intf", .data = &mt8195_dpintf_conf },
 	{ .compatible = "mediatek,mt8192-dpi", .data = &mt8192_conf },
 	{ .compatible = "mediatek,mt8195-dp-intf", .data = &mt8195_dpintf_conf },
+	{ .compatible = "mediatek,mt8195-dpi", .data = &mt8195_conf },
 	{ /* sentinel */ },
 };
 MODULE_DEVICE_TABLE(of, mtk_dpi_of_ids);
......
@@ -40,6 +40,11 @@
 #define FAKE_DE_LEVEN		BIT(21)
 #define FAKE_DE_RODD		BIT(22)
 #define FAKE_DE_REVEN		BIT(23)
+
+/* DPI_CON: DPI instances */
+#define DPI_OUTPUT_1T1P_EN	BIT(24)
+#define DPI_INPUT_2P_EN		BIT(25)
+/* DPI_CON: DPINTF instances */
 #define DPINTF_YUV422_EN	BIT(24)
 #define DPINTF_CSC_ENABLE	BIT(26)
 #define DPINTF_INPUT_2P_EN	BIT(29)
@@ -235,4 +240,8 @@
 #define MATRIX_SEL_RGB_TO_JPEG	0
 #define MATRIX_SEL_RGB_TO_BT601	2
 
+#define DPI_PATTERN0		0xf00
+#define DPI_PAT_EN		BIT(0)
+#define DPI_PAT_SEL		GENMASK(6, 4)
+
 #endif /* __MTK_DPI_REGS_H */
@@ -327,6 +327,10 @@ static const struct mtk_mmsys_driver_data mt8195_vdosys1_driver_data = {
 	.min_height = 1,
 };
 
+static const struct mtk_mmsys_driver_data mt8365_mmsys_driver_data = {
+	.mmsys_dev_num = 1,
+};
+
 static const struct of_device_id mtk_drm_of_ids[] = {
 	{ .compatible = "mediatek,mt2701-mmsys",
 	  .data = &mt2701_mmsys_driver_data},
@@ -354,6 +358,8 @@ static const struct of_device_id mtk_drm_of_ids[] = {
 	  .data = &mt8195_vdosys0_driver_data},
 	{ .compatible = "mediatek,mt8195-vdosys1",
 	  .data = &mt8195_vdosys1_driver_data},
+	{ .compatible = "mediatek,mt8365-mmsys",
+	  .data = &mt8365_mmsys_driver_data},
 	{ }
 };
 MODULE_DEVICE_TABLE(of, mtk_drm_of_ids);
@@ -754,6 +760,8 @@ static const struct of_device_id mtk_ddp_comp_dt_ids[] = {
 	  .data = (void *)MTK_DISP_MUTEX },
 	{ .compatible = "mediatek,mt8195-disp-mutex",
 	  .data = (void *)MTK_DISP_MUTEX },
+	{ .compatible = "mediatek,mt8365-disp-mutex",
+	  .data = (void *)MTK_DISP_MUTEX },
 	{ .compatible = "mediatek,mt8173-disp-od",
 	  .data = (void *)MTK_DISP_OD },
 	{ .compatible = "mediatek,mt2701-disp-ovl",
@@ -810,6 +818,8 @@ static const struct of_device_id mtk_ddp_comp_dt_ids[] = {
 	  .data = (void *)MTK_DPI },
 	{ .compatible = "mediatek,mt8195-dp-intf",
 	  .data = (void *)MTK_DP_INTF },
+	{ .compatible = "mediatek,mt8195-dpi",
+	  .data = (void *)MTK_DPI },
 	{ .compatible = "mediatek,mt2701-dsi",
 	  .data = (void *)MTK_DSI },
 	{ .compatible = "mediatek,mt8173-dsi",
......
@@ -1116,12 +1116,12 @@ static ssize_t mtk_dsi_host_transfer(struct mipi_dsi_host *host,
 				     const struct mipi_dsi_msg *msg)
 {
 	struct mtk_dsi *dsi = host_to_dsi(host);
-	u32 recv_cnt, i;
+	ssize_t recv_cnt;
 	u8 read_data[16];
 	void *src_addr;
 	u8 irq_flag = CMD_DONE_INT_FLAG;
 	u32 dsi_mode;
-	int ret;
+	int ret, i;
 
 	dsi_mode = readl(dsi->regs + DSI_MODE_CTRL);
 	if (dsi_mode & MODE) {
@@ -1170,7 +1170,7 @@ static ssize_t mtk_dsi_host_transfer(struct mipi_dsi_host *host,
 	if (recv_cnt)
 		memcpy(msg->rx_buf, src_addr, recv_cnt);
 
-	DRM_INFO("dsi get %d byte data from the panel address(0x%x)\n",
+	DRM_INFO("dsi get %zd byte data from the panel address(0x%x)\n",
 		 recv_cnt, *((u8 *)(msg->tx_buf)));
 
 restore_dsi_mode:
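A standalone illustration of one reason a byte count that is also used as a return value is better typed ssize_t than u32 (plain userspace C, values made up): a negative error code stored in an unsigned counter comes back as a huge positive length instead of an error.

#include <stdint.h>
#include <stdio.h>
#include <sys/types.h>

#define EAGAIN 11

static long transfer_with_u32(void)
{
        uint32_t recv_cnt = (uint32_t)-EAGAIN; /* error stuffed into unsigned */

        return recv_cnt;                       /* caller sees 4294967285 */
}

static long transfer_with_ssize_t(void)
{
        ssize_t recv_cnt = -EAGAIN;

        return recv_cnt;                       /* caller sees -11 */
}

int main(void)
{
        printf("u32:     %ld\nssize_t: %ld\n",
               transfer_with_u32(), transfer_with_ssize_t());
        return 0;
}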
......
@@ -137,7 +137,7 @@ enum hdmi_aud_channel_swap_type {
 
 struct hdmi_audio_param {
 	enum hdmi_audio_coding_type aud_codec;
-	enum hdmi_audio_sample_size aud_sampe_size;
+	enum hdmi_audio_sample_size aud_sample_size;
 	enum hdmi_aud_input_type aud_input_type;
 	enum hdmi_aud_i2s_fmt aud_i2s_fmt;
 	enum hdmi_aud_mclk aud_mclk;
@@ -163,16 +163,10 @@ struct mtk_hdmi {
 	struct clk *clk[MTK_HDMI_CLK_COUNT];
 	struct drm_display_mode mode;
 	bool dvi_mode;
-	u32 min_clock;
-	u32 max_clock;
-	u32 max_hdisplay;
-	u32 max_vdisplay;
-	u32 ibias;
-	u32 ibias_up;
 	struct regmap *sys_regmap;
 	unsigned int sys_offset;
 	void __iomem *regs;
-	enum hdmi_colorspace csp;
+	struct platform_device *audio_pdev;
 	struct hdmi_audio_param aud_param;
 	bool audio_enable;
 	bool powered;
@@ -987,15 +981,14 @@ static int mtk_hdmi_setup_avi_infoframe(struct mtk_hdmi *hdmi,
 	return 0;
 }
 
-static int mtk_hdmi_setup_spd_infoframe(struct mtk_hdmi *hdmi,
-					const char *vendor,
-					const char *product)
+static int mtk_hdmi_setup_spd_infoframe(struct mtk_hdmi *hdmi)
 {
+	struct drm_bridge *bridge = &hdmi->bridge;
 	struct hdmi_spd_infoframe frame;
 	u8 buffer[HDMI_INFOFRAME_HEADER_SIZE + HDMI_SPD_INFOFRAME_SIZE];
 	ssize_t err;
 
-	err = hdmi_spd_infoframe_init(&frame, vendor, product);
+	err = hdmi_spd_infoframe_init(&frame, bridge->vendor, bridge->product);
 	if (err < 0) {
 		dev_err(hdmi->dev, "Failed to initialize SPD infoframe: %zd\n",
 			err);
@@ -1072,9 +1065,8 @@ static int mtk_hdmi_output_init(struct mtk_hdmi *hdmi)
 {
 	struct hdmi_audio_param *aud_param = &hdmi->aud_param;
 
-	hdmi->csp = HDMI_COLORSPACE_RGB;
 	aud_param->aud_codec = HDMI_AUDIO_CODING_TYPE_PCM;
-	aud_param->aud_sampe_size = HDMI_AUDIO_SAMPLE_SIZE_16;
+	aud_param->aud_sample_size = HDMI_AUDIO_SAMPLE_SIZE_16;
 	aud_param->aud_input_type = HDMI_AUD_INPUT_I2S;
 	aud_param->aud_i2s_fmt = HDMI_I2S_MODE_I2S_24BIT;
 	aud_param->aud_mclk = HDMI_AUD_MCLK_128FS;
@@ -1167,13 +1159,12 @@ static int mtk_hdmi_clk_enable_audio(struct mtk_hdmi *hdmi)
 		return ret;
 
 	ret = clk_prepare_enable(hdmi->clk[MTK_HDMI_CLK_AUD_SPDIF]);
-	if (ret)
-		goto err;
+	if (ret) {
+		clk_disable_unprepare(hdmi->clk[MTK_HDMI_CLK_AUD_BCLK]);
+		return ret;
+	}
 
 	return 0;
-
-err:
-	clk_disable_unprepare(hdmi->clk[MTK_HDMI_CLK_AUD_BCLK]);
-	return ret;
 }
 
 static void mtk_hdmi_clk_disable_audio(struct mtk_hdmi *hdmi)
@@ -1377,7 +1368,7 @@ static void mtk_hdmi_send_infoframe(struct mtk_hdmi *hdmi,
 {
 	mtk_hdmi_setup_audio_infoframe(hdmi);
 	mtk_hdmi_setup_avi_infoframe(hdmi, mode);
-	mtk_hdmi_setup_spd_infoframe(hdmi, "mediatek", "On-chip HDMI");
+	mtk_hdmi_setup_spd_infoframe(hdmi);
 
 	if (mode->flags & DRM_MODE_FLAG_3D_MASK)
 		mtk_hdmi_setup_vendor_specific_infoframe(hdmi, mode);
 }
@@ -1569,14 +1560,14 @@ static int mtk_hdmi_audio_hw_params(struct device *dev, void *data,
 	switch (daifmt->fmt) {
 	case HDMI_I2S:
 		hdmi_params.aud_codec = HDMI_AUDIO_CODING_TYPE_PCM;
-		hdmi_params.aud_sampe_size = HDMI_AUDIO_SAMPLE_SIZE_16;
+		hdmi_params.aud_sample_size = HDMI_AUDIO_SAMPLE_SIZE_16;
 		hdmi_params.aud_input_type = HDMI_AUD_INPUT_I2S;
 		hdmi_params.aud_i2s_fmt = HDMI_I2S_MODE_I2S_24BIT;
 		hdmi_params.aud_mclk = HDMI_AUD_MCLK_128FS;
 		break;
 	case HDMI_SPDIF:
 		hdmi_params.aud_codec = HDMI_AUDIO_CODING_TYPE_PCM;
-		hdmi_params.aud_sampe_size = HDMI_AUDIO_SAMPLE_SIZE_16;
+		hdmi_params.aud_sample_size = HDMI_AUDIO_SAMPLE_SIZE_16;
 		hdmi_params.aud_input_type = HDMI_AUD_INPUT_SPDIF;
 		break;
 	default:
@@ -1659,6 +1650,11 @@ static const struct hdmi_codec_ops mtk_hdmi_audio_codec_ops = {
 	.hook_plugged_cb = mtk_hdmi_audio_hook_plugged_cb,
 };
 
+static void mtk_hdmi_unregister_audio_driver(void *data)
+{
+	platform_device_unregister(data);
+}
+
 static int mtk_hdmi_register_audio_driver(struct device *dev)
 {
 	struct mtk_hdmi *hdmi = dev_get_drvdata(dev);
@@ -1669,15 +1665,21 @@ static int mtk_hdmi_register_audio_driver(struct device *dev)
 		.data = hdmi,
 		.no_capture_mute = 1,
 	};
-	struct platform_device *pdev;
+	int ret;
 
-	pdev = platform_device_register_data(dev, HDMI_CODEC_DRV_NAME,
-					     PLATFORM_DEVID_AUTO, &codec_data,
-					     sizeof(codec_data));
-	if (IS_ERR(pdev))
-		return PTR_ERR(pdev);
+	hdmi->audio_pdev = platform_device_register_data(dev,
+							 HDMI_CODEC_DRV_NAME,
+							 PLATFORM_DEVID_AUTO,
+							 &codec_data,
+							 sizeof(codec_data));
+	if (IS_ERR(hdmi->audio_pdev))
+		return PTR_ERR(hdmi->audio_pdev);
 
-	DRM_INFO("%s driver bound to HDMI\n", HDMI_CODEC_DRV_NAME);
+	ret = devm_add_action_or_reset(dev, mtk_hdmi_unregister_audio_driver,
+				       hdmi->audio_pdev);
+	if (ret)
+		return ret;
 
 	return 0;
 }
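A generic sketch of the devm pattern this hunk switches the audio codec registration to (kernel-style C; the "example-codec" name and functions here are illustrative, not from the MediaTek driver): register the child device, then hand its teardown to devm so both driver unbind and a later probe failure unregister it automatically.

#include <linux/device.h>
#include <linux/err.h>
#include <linux/platform_device.h>

static void example_unregister_codec(void *data)
{
	platform_device_unregister(data);
}

static int example_register_codec(struct device *dev)
{
	struct platform_device *codec;
	int ret;

	codec = platform_device_register_data(dev, "example-codec",
					      PLATFORM_DEVID_AUTO, NULL, 0);
	if (IS_ERR(codec))
		return PTR_ERR(codec);

	/* If adding the action fails, the codec is unregistered right away. */
	ret = devm_add_action_or_reset(dev, example_unregister_codec, codec);
	if (ret)
		return ret;

	return 0;
}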
@@ -1721,14 +1723,17 @@ static int mtk_hdmi_probe(struct platform_device *pdev)
 	hdmi->bridge.ops = DRM_BRIDGE_OP_DETECT | DRM_BRIDGE_OP_EDID
 			 | DRM_BRIDGE_OP_HPD;
 	hdmi->bridge.type = DRM_MODE_CONNECTOR_HDMIA;
-	drm_bridge_add(&hdmi->bridge);
+	hdmi->bridge.vendor = "MediaTek";
+	hdmi->bridge.product = "On-Chip HDMI";
+
+	ret = devm_drm_bridge_add(dev, &hdmi->bridge);
+	if (ret)
+		return dev_err_probe(dev, ret, "Failed to add bridge\n");
 
 	ret = mtk_hdmi_clk_enable_audio(hdmi);
-	if (ret) {
-		drm_bridge_remove(&hdmi->bridge);
+	if (ret)
 		return dev_err_probe(dev, ret,
 				     "Failed to enable audio clocks\n");
-	}
 
 	return 0;
 }
@@ -1737,12 +1742,10 @@ static void mtk_hdmi_remove(struct platform_device *pdev)
 {
 	struct mtk_hdmi *hdmi = platform_get_drvdata(pdev);
 
-	drm_bridge_remove(&hdmi->bridge);
 	mtk_hdmi_clk_disable_audio(hdmi);
 }
 
-#ifdef CONFIG_PM_SLEEP
-static int mtk_hdmi_suspend(struct device *dev)
+static __maybe_unused int mtk_hdmi_suspend(struct device *dev)
 {
 	struct mtk_hdmi *hdmi = dev_get_drvdata(dev);
@@ -1751,22 +1754,14 @@ static int mtk_hdmi_suspend(struct device *dev)
 	return 0;
 }
 
-static int mtk_hdmi_resume(struct device *dev)
+static __maybe_unused int mtk_hdmi_resume(struct device *dev)
 {
 	struct mtk_hdmi *hdmi = dev_get_drvdata(dev);
-	int ret = 0;
 
-	ret = mtk_hdmi_clk_enable_audio(hdmi);
-	if (ret) {
-		dev_err(dev, "hdmi resume failed!\n");
-		return ret;
-	}
-
-	return 0;
+	return mtk_hdmi_clk_enable_audio(hdmi);
 }
-#endif
 
-static SIMPLE_DEV_PM_OPS(mtk_hdmi_pm_ops,
-			 mtk_hdmi_suspend, mtk_hdmi_resume);
+static SIMPLE_DEV_PM_OPS(mtk_hdmi_pm_ops, mtk_hdmi_suspend, mtk_hdmi_resume);
 
 static const struct mtk_hdmi_conf mtk_hdmi_conf_mt2701 = {
 	.tz_disabled = true,
@@ -1778,15 +1773,10 @@ static const struct mtk_hdmi_conf mtk_hdmi_conf_mt8167 = {
 };
 
 static const struct of_device_id mtk_hdmi_of_ids[] = {
-	{ .compatible = "mediatek,mt2701-hdmi",
-	  .data = &mtk_hdmi_conf_mt2701,
-	},
-	{ .compatible = "mediatek,mt8167-hdmi",
-	  .data = &mtk_hdmi_conf_mt8167,
-	},
-	{ .compatible = "mediatek,mt8173-hdmi",
-	},
-	{}
+	{ .compatible = "mediatek,mt2701-hdmi", .data = &mtk_hdmi_conf_mt2701 },
+	{ .compatible = "mediatek,mt8167-hdmi", .data = &mtk_hdmi_conf_mt8167 },
+	{ .compatible = "mediatek,mt8173-hdmi" },
+	{ /* sentinel */ }
 };
 MODULE_DEVICE_TABLE(of, mtk_hdmi_of_ids);
......
@@ -82,7 +82,7 @@ write_dpt_remapped(struct xe_bo *bo, struct iosys_map *map, u32 *dpt_ofs,
 static int __xe_pin_fb_vma_dpt(const struct intel_framebuffer *fb,
 			       const struct i915_gtt_view *view,
 			       struct i915_vma *vma,
-			       u64 physical_alignment)
+			       unsigned int alignment)
 {
 	struct xe_device *xe = to_xe_device(fb->base.dev);
 	struct xe_tile *tile0 = xe_device_get_root_tile(xe);
@@ -108,7 +108,7 @@ static int __xe_pin_fb_vma_dpt(const struct intel_framebuffer *fb,
 						 XE_BO_FLAG_VRAM0 |
 						 XE_BO_FLAG_GGTT |
 						 XE_BO_FLAG_PAGETABLE,
-						 physical_alignment);
+						 alignment);
 	else
 		dpt = xe_bo_create_pin_map_at_aligned(xe, tile0, NULL,
 						      dpt_size, ~0ull,
@@ -116,7 +116,7 @@ static int __xe_pin_fb_vma_dpt(const struct intel_framebuffer *fb,
 						      XE_BO_FLAG_STOLEN |
 						      XE_BO_FLAG_GGTT |
 						      XE_BO_FLAG_PAGETABLE,
-						      physical_alignment);
+						      alignment);
 	if (IS_ERR(dpt))
 		dpt = xe_bo_create_pin_map_at_aligned(xe, tile0, NULL,
 						      dpt_size, ~0ull,
@@ -124,7 +124,7 @@ static int __xe_pin_fb_vma_dpt(const struct intel_framebuffer *fb,
 						      XE_BO_FLAG_SYSTEM |
 						      XE_BO_FLAG_GGTT |
 						      XE_BO_FLAG_PAGETABLE,
-						      physical_alignment);
+						      alignment);
 	if (IS_ERR(dpt))
 		return PTR_ERR(dpt);
@@ -194,7 +194,7 @@ write_ggtt_rotated(struct xe_bo *bo, struct xe_ggtt *ggtt, u32 *ggtt_ofs, u32 bo
 static int __xe_pin_fb_vma_ggtt(const struct intel_framebuffer *fb,
 				const struct i915_gtt_view *view,
 				struct i915_vma *vma,
-				u64 physical_alignment)
+				unsigned int alignment)
 {
 	struct drm_gem_object *obj = intel_fb_bo(&fb->base);
 	struct xe_bo *bo = gem_to_xe_bo(obj);
@@ -277,7 +277,7 @@ static int __xe_pin_fb_vma_ggtt(const struct intel_framebuffer *fb,
 static struct i915_vma *__xe_pin_fb_vma(const struct intel_framebuffer *fb,
 					const struct i915_gtt_view *view,
-					u64 physical_alignment)
+					unsigned int alignment)
 {
 	struct drm_device *dev = fb->base.dev;
 	struct xe_device *xe = to_xe_device(dev);
@@ -327,9 +327,9 @@ static struct i915_vma *__xe_pin_fb_vma(const struct intel_framebuffer *fb,
 	vma->bo = bo;
 	if (intel_fb_uses_dpt(&fb->base))
-		ret = __xe_pin_fb_vma_dpt(fb, view, vma, physical_alignment);
+		ret = __xe_pin_fb_vma_dpt(fb, view, vma, alignment);
 	else
-		ret = __xe_pin_fb_vma_ggtt(fb, view, vma, physical_alignment);
+		ret = __xe_pin_fb_vma_ggtt(fb, view, vma, alignment);
 	if (ret)
 		goto err_unpin;
@@ -422,7 +422,7 @@ int intel_plane_pin_fb(struct intel_plane_state *new_plane_state,
 	struct i915_vma *vma;
 	struct intel_framebuffer *intel_fb = to_intel_framebuffer(fb);
 	struct intel_plane *plane = to_intel_plane(new_plane_state->uapi.plane);
-	u64 phys_alignment = plane->min_alignment(plane, fb, 0);
+	unsigned int alignment = plane->min_alignment(plane, fb, 0);
 
 	if (reuse_vma(new_plane_state, old_plane_state))
 		return 0;
@@ -430,7 +430,7 @@ int intel_plane_pin_fb(struct intel_plane_state *new_plane_state,
 	/* We reject creating !SCANOUT fb's, so this is weird.. */
 	drm_WARN_ON(bo->ttm.base.dev, !(bo->flags & XE_BO_FLAG_SCANOUT));
 
-	vma = __xe_pin_fb_vma(intel_fb, &new_plane_state->view.gtt, phys_alignment);
+	vma = __xe_pin_fb_vma(intel_fb, &new_plane_state->view.gtt, alignment);
 	if (IS_ERR(vma))
 		return PTR_ERR(vma);
......
...@@ -320,7 +320,7 @@ static void xe_rtp_process_to_sr_tests(struct kunit *test) ...@@ -320,7 +320,7 @@ static void xe_rtp_process_to_sr_tests(struct kunit *test)
count_rtp_entries++; count_rtp_entries++;
xe_rtp_process_ctx_enable_active_tracking(&ctx, &active, count_rtp_entries); xe_rtp_process_ctx_enable_active_tracking(&ctx, &active, count_rtp_entries);
xe_rtp_process_to_sr(&ctx, param->entries, reg_sr); xe_rtp_process_to_sr(&ctx, param->entries, count_rtp_entries, reg_sr);
xa_for_each(&reg_sr->xa, idx, sre) { xa_for_each(&reg_sr->xa, idx, sre) {
if (idx == param->expected_reg.addr) if (idx == param->expected_reg.addr)
......
...@@ -1496,14 +1496,6 @@ void xe_guc_stop(struct xe_guc *guc) ...@@ -1496,14 +1496,6 @@ void xe_guc_stop(struct xe_guc *guc)
int xe_guc_start(struct xe_guc *guc) int xe_guc_start(struct xe_guc *guc)
{ {
if (!IS_SRIOV_VF(guc_to_xe(guc))) {
int err;
err = xe_guc_pc_start(&guc->pc);
xe_gt_WARN(guc_to_gt(guc), err, "Failed to start GuC PC: %pe\n",
ERR_PTR(err));
}
return xe_guc_submit_start(guc); return xe_guc_submit_start(guc);
} }
......
...@@ -400,10 +400,9 @@ xe_hw_engine_setup_default_lrc_state(struct xe_hw_engine *hwe) ...@@ -400,10 +400,9 @@ xe_hw_engine_setup_default_lrc_state(struct xe_hw_engine *hwe)
PREEMPT_GPGPU_THREAD_GROUP_LEVEL)), PREEMPT_GPGPU_THREAD_GROUP_LEVEL)),
XE_RTP_ENTRY_FLAG(FOREACH_ENGINE) XE_RTP_ENTRY_FLAG(FOREACH_ENGINE)
}, },
{}
}; };
xe_rtp_process_to_sr(&ctx, lrc_setup, &hwe->reg_lrc); xe_rtp_process_to_sr(&ctx, lrc_setup, ARRAY_SIZE(lrc_setup), &hwe->reg_lrc);
} }
static void static void
...@@ -459,10 +458,9 @@ hw_engine_setup_default_state(struct xe_hw_engine *hwe) ...@@ -459,10 +458,9 @@ hw_engine_setup_default_state(struct xe_hw_engine *hwe)
XE_RTP_ACTIONS(SET(CSFE_CHICKEN1(0), CS_PRIORITY_MEM_READ, XE_RTP_ACTIONS(SET(CSFE_CHICKEN1(0), CS_PRIORITY_MEM_READ,
XE_RTP_ACTION_FLAG(ENGINE_BASE))) XE_RTP_ACTION_FLAG(ENGINE_BASE)))
}, },
{}
}; };
xe_rtp_process_to_sr(&ctx, engine_entries, &hwe->reg_sr); xe_rtp_process_to_sr(&ctx, engine_entries, ARRAY_SIZE(engine_entries), &hwe->reg_sr);
} }
static const struct engine_info *find_engine_info(enum xe_engine_class class, int instance) static const struct engine_info *find_engine_info(enum xe_engine_class class, int instance)
......
...@@ -781,7 +781,9 @@ void xe_mocs_dump(struct xe_gt *gt, struct drm_printer *p) ...@@ -781,7 +781,9 @@ void xe_mocs_dump(struct xe_gt *gt, struct drm_printer *p)
flags = get_mocs_settings(xe, &table); flags = get_mocs_settings(xe, &table);
xe_pm_runtime_get_noresume(xe); xe_pm_runtime_get_noresume(xe);
fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT); fw_ref = xe_force_wake_get(gt_to_fw(gt),
flags & HAS_LNCF_MOCS ?
XE_FORCEWAKE_ALL : XE_FW_GT);
if (!fw_ref) if (!fw_ref)
goto err_fw; goto err_fw;
......
...@@ -88,7 +88,6 @@ static const struct xe_rtp_entry_sr register_whitelist[] = { ...@@ -88,7 +88,6 @@ static const struct xe_rtp_entry_sr register_whitelist[] = {
RING_FORCE_TO_NONPRIV_ACCESS_RD | RING_FORCE_TO_NONPRIV_ACCESS_RD |
RING_FORCE_TO_NONPRIV_RANGE_4)) RING_FORCE_TO_NONPRIV_RANGE_4))
}, },
{}
}; };
static void whitelist_apply_to_hwe(struct xe_hw_engine *hwe) static void whitelist_apply_to_hwe(struct xe_hw_engine *hwe)
...@@ -137,7 +136,8 @@ void xe_reg_whitelist_process_engine(struct xe_hw_engine *hwe) ...@@ -137,7 +136,8 @@ void xe_reg_whitelist_process_engine(struct xe_hw_engine *hwe)
{ {
struct xe_rtp_process_ctx ctx = XE_RTP_PROCESS_CTX_INITIALIZER(hwe); struct xe_rtp_process_ctx ctx = XE_RTP_PROCESS_CTX_INITIALIZER(hwe);
xe_rtp_process_to_sr(&ctx, register_whitelist, &hwe->reg_whitelist); xe_rtp_process_to_sr(&ctx, register_whitelist, ARRAY_SIZE(register_whitelist),
&hwe->reg_whitelist);
whitelist_apply_to_hwe(hwe); whitelist_apply_to_hwe(hwe);
} }
......
...@@ -90,11 +90,10 @@ static int emit_flush_dw(u32 *dw, int i) ...@@ -90,11 +90,10 @@ static int emit_flush_dw(u32 *dw, int i)
return i; return i;
} }
static int emit_flush_imm_ggtt(u32 addr, u32 value, bool invalidate_tlb, static int emit_flush_imm_ggtt(u32 addr, u32 value, u32 flags, u32 *dw, int i)
u32 *dw, int i)
{ {
dw[i++] = MI_FLUSH_DW | MI_FLUSH_DW_OP_STOREDW | MI_FLUSH_IMM_DW | dw[i++] = MI_FLUSH_DW | MI_FLUSH_DW_OP_STOREDW | MI_FLUSH_IMM_DW |
(invalidate_tlb ? MI_INVALIDATE_TLB : 0); flags;
dw[i++] = addr | MI_FLUSH_DW_USE_GTT; dw[i++] = addr | MI_FLUSH_DW_USE_GTT;
dw[i++] = 0; dw[i++] = 0;
dw[i++] = value; dw[i++] = value;
...@@ -111,16 +110,13 @@ static int emit_bb_start(u64 batch_addr, u32 ppgtt_flag, u32 *dw, int i) ...@@ -111,16 +110,13 @@ static int emit_bb_start(u64 batch_addr, u32 ppgtt_flag, u32 *dw, int i)
return i; return i;
} }
static int emit_flush_invalidate(u32 flag, u32 *dw, int i) static int emit_flush_invalidate(u32 *dw, int i)
{ {
removed:
	dw[i] = MI_FLUSH_DW;
	dw[i] |= flag;
	dw[i++] |= MI_INVALIDATE_TLB | MI_FLUSH_DW_OP_STOREDW | MI_FLUSH_IMM_DW |
		MI_FLUSH_DW_STORE_INDEX;
	dw[i++] = LRC_PPHWSP_FLUSH_INVAL_SCRATCH_ADDR | MI_FLUSH_DW_USE_GTT;
	dw[i++] = 0;
	dw[i++] = ~0U;
added:
	dw[i++] = MI_FLUSH_DW | MI_INVALIDATE_TLB | MI_FLUSH_DW_OP_STOREDW |
		  MI_FLUSH_IMM_DW | MI_FLUSH_DW_STORE_INDEX;
	dw[i++] = LRC_PPHWSP_FLUSH_INVAL_SCRATCH_ADDR;
	dw[i++] = 0;
	dw[i++] = 0;
return i; return i;
} }
...@@ -257,7 +253,7 @@ static void __emit_job_gen12_simple(struct xe_sched_job *job, struct xe_lrc *lrc ...@@ -257,7 +253,7 @@ static void __emit_job_gen12_simple(struct xe_sched_job *job, struct xe_lrc *lrc
if (job->ring_ops_flush_tlb) { if (job->ring_ops_flush_tlb) {
dw[i++] = preparser_disable(true); dw[i++] = preparser_disable(true);
i = emit_flush_imm_ggtt(xe_lrc_start_seqno_ggtt_addr(lrc), i = emit_flush_imm_ggtt(xe_lrc_start_seqno_ggtt_addr(lrc),
seqno, true, dw, i); seqno, MI_INVALIDATE_TLB, dw, i);
dw[i++] = preparser_disable(false); dw[i++] = preparser_disable(false);
} else { } else {
i = emit_store_imm_ggtt(xe_lrc_start_seqno_ggtt_addr(lrc), i = emit_store_imm_ggtt(xe_lrc_start_seqno_ggtt_addr(lrc),
...@@ -273,7 +269,7 @@ static void __emit_job_gen12_simple(struct xe_sched_job *job, struct xe_lrc *lrc ...@@ -273,7 +269,7 @@ static void __emit_job_gen12_simple(struct xe_sched_job *job, struct xe_lrc *lrc
dw, i); dw, i);
} }
i = emit_flush_imm_ggtt(xe_lrc_seqno_ggtt_addr(lrc), seqno, false, dw, i); i = emit_flush_imm_ggtt(xe_lrc_seqno_ggtt_addr(lrc), seqno, 0, dw, i);
i = emit_user_interrupt(dw, i); i = emit_user_interrupt(dw, i);
...@@ -319,7 +315,7 @@ static void __emit_job_gen12_video(struct xe_sched_job *job, struct xe_lrc *lrc, ...@@ -319,7 +315,7 @@ static void __emit_job_gen12_video(struct xe_sched_job *job, struct xe_lrc *lrc,
if (job->ring_ops_flush_tlb) if (job->ring_ops_flush_tlb)
i = emit_flush_imm_ggtt(xe_lrc_start_seqno_ggtt_addr(lrc), i = emit_flush_imm_ggtt(xe_lrc_start_seqno_ggtt_addr(lrc),
seqno, true, dw, i); seqno, MI_INVALIDATE_TLB, dw, i);
dw[i++] = preparser_disable(false); dw[i++] = preparser_disable(false);
...@@ -336,7 +332,7 @@ static void __emit_job_gen12_video(struct xe_sched_job *job, struct xe_lrc *lrc, ...@@ -336,7 +332,7 @@ static void __emit_job_gen12_video(struct xe_sched_job *job, struct xe_lrc *lrc,
dw, i); dw, i);
} }
i = emit_flush_imm_ggtt(xe_lrc_seqno_ggtt_addr(lrc), seqno, false, dw, i); i = emit_flush_imm_ggtt(xe_lrc_seqno_ggtt_addr(lrc), seqno, 0, dw, i);
i = emit_user_interrupt(dw, i); i = emit_user_interrupt(dw, i);
...@@ -413,7 +409,7 @@ static void emit_migration_job_gen12(struct xe_sched_job *job, ...@@ -413,7 +409,7 @@ static void emit_migration_job_gen12(struct xe_sched_job *job,
if (!IS_SRIOV_VF(gt_to_xe(job->q->gt))) { if (!IS_SRIOV_VF(gt_to_xe(job->q->gt))) {
/* XXX: Do we need this? Leaving for now. */ /* XXX: Do we need this? Leaving for now. */
dw[i++] = preparser_disable(true); dw[i++] = preparser_disable(true);
i = emit_flush_invalidate(0, dw, i); i = emit_flush_invalidate(dw, i);
dw[i++] = preparser_disable(false); dw[i++] = preparser_disable(false);
} }
......
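As a minimal illustration of the new calling convention (not part of the patch itself): emit_flush_imm_ggtt() now takes MI_* flush flags directly instead of a boolean, so the two patterns used by the callers above look roughly like this, where lrc, seqno, dw and i stand for the same locals as in the hunks above.

	/* flush + TLB invalidation before the batch starts */
	i = emit_flush_imm_ggtt(xe_lrc_start_seqno_ggtt_addr(lrc), seqno,
				MI_INVALIDATE_TLB, dw, i);

	/* plain flush that only writes back the final seqno, no extra flags */
	i = emit_flush_imm_ggtt(xe_lrc_seqno_ggtt_addr(lrc), seqno, 0, dw, i);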
...@@ -237,6 +237,7 @@ static void rtp_mark_active(struct xe_device *xe, ...@@ -237,6 +237,7 @@ static void rtp_mark_active(struct xe_device *xe,
* the save-restore argument. * the save-restore argument.
* @ctx: The context for processing the table, with one of device, gt or hwe * @ctx: The context for processing the table, with one of device, gt or hwe
* @entries: Table with RTP definitions * @entries: Table with RTP definitions
* @n_entries: Number of entries to process, usually ARRAY_SIZE(entries)
* @sr: Save-restore struct where matching rules execute the action. This can be * viewed as the "coalesced view" of the multiple tables. The bits for each
* viewed as the "coalesced view" of multiple the tables. The bits for each * viewed as the "coalesced view" of multiple the tables. The bits for each
* register set are expected not to collide with previously added entries * register set are expected not to collide with previously added entries
...@@ -247,6 +248,7 @@ static void rtp_mark_active(struct xe_device *xe, ...@@ -247,6 +248,7 @@ static void rtp_mark_active(struct xe_device *xe,
*/ */
void xe_rtp_process_to_sr(struct xe_rtp_process_ctx *ctx, void xe_rtp_process_to_sr(struct xe_rtp_process_ctx *ctx,
const struct xe_rtp_entry_sr *entries, const struct xe_rtp_entry_sr *entries,
size_t n_entries,
struct xe_reg_sr *sr) struct xe_reg_sr *sr)
{ {
const struct xe_rtp_entry_sr *entry; const struct xe_rtp_entry_sr *entry;
...@@ -259,7 +261,9 @@ void xe_rtp_process_to_sr(struct xe_rtp_process_ctx *ctx, ...@@ -259,7 +261,9 @@ void xe_rtp_process_to_sr(struct xe_rtp_process_ctx *ctx,
if (IS_SRIOV_VF(xe)) if (IS_SRIOV_VF(xe))
return; return;
for (entry = entries; entry && entry->name; entry++) { xe_assert(xe, entries);
for (entry = entries; entry - entries < n_entries; entry++) {
bool match = false; bool match = false;
if (entry->flags & XE_RTP_ENTRY_FLAG_FOREACH_ENGINE) { if (entry->flags & XE_RTP_ENTRY_FLAG_FOREACH_ENGINE) {
......
...@@ -430,7 +430,7 @@ void xe_rtp_process_ctx_enable_active_tracking(struct xe_rtp_process_ctx *ctx, ...@@ -430,7 +430,7 @@ void xe_rtp_process_ctx_enable_active_tracking(struct xe_rtp_process_ctx *ctx,
void xe_rtp_process_to_sr(struct xe_rtp_process_ctx *ctx, void xe_rtp_process_to_sr(struct xe_rtp_process_ctx *ctx,
const struct xe_rtp_entry_sr *entries, const struct xe_rtp_entry_sr *entries,
struct xe_reg_sr *sr); size_t n_entries, struct xe_reg_sr *sr);
void xe_rtp_process(struct xe_rtp_process_ctx *ctx, void xe_rtp_process(struct xe_rtp_process_ctx *ctx,
const struct xe_rtp_entry *entries); const struct xe_rtp_entry *entries);
......
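Putting the xe_rtp changes together, a caller after this series is expected to look roughly like the sketch below. This is an illustrative sketch only: example_entries, example_hwe_setup, EXAMPLE_REG and EXAMPLE_BIT are hypothetical names, while the sentinel-free table and the ARRAY_SIZE()-based call mirror the updated callers shown above.

	static const struct xe_rtp_entry_sr example_entries[] = {
		{ XE_RTP_NAME("EXAMPLE"),
		  /* EXAMPLE_REG / EXAMPLE_BIT are placeholders for a real register and bit */
		  XE_RTP_RULES(GRAPHICS_VERSION(2004)),
		  XE_RTP_ACTIONS(SET(EXAMPLE_REG, EXAMPLE_BIT))
		},
		/* note: no terminating {} sentinel entry any more */
	};

	static void example_hwe_setup(struct xe_hw_engine *hwe)
	{
		struct xe_rtp_process_ctx ctx = XE_RTP_PROCESS_CTX_INITIALIZER(hwe);

		/* the entry count is now passed explicitly instead of relying on a sentinel */
		xe_rtp_process_to_sr(&ctx, example_entries, ARRAY_SIZE(example_entries),
				     &hwe->reg_sr);
	}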