<meta name="google-translate-customization" content="38b387022ed0f4d4-a4eb7ef5c10c8ae0-g2870fab75904ce51-18"></meta>
<div id="google_translate_element"></div>
<script type="text/javascript" src="/wiki/translate.js"></script>

# Nvidia Optimus

'Optimus technology' is a software (and possibly hardware) solution for automatically switching between an integrated graphics processor or IGP (such as an onboard Intel chip) and a more powerful discrete Nvidia GPU. This technology is aimed specifically at laptops. Its precursor was 'switchable graphics', in which the user switched between the graphics chips manually. It may require that the Nvidia GPU has the PCOPY engine. 

The graphics system in a laptop has a GPU with some memory. In the case of an IGP this memory may be a piece of system memory; otherwise it is usually dedicated memory living on the GPU. This GPU connects to the laptop display or to an output port. There are two main problems to solve in order to support Optimus under Linux: 

1) Currently we have no way of knowing a priori which outputs (displays) are connected to which GPU. 

2) The Optimus software should perform the task of switching which of the two graphics processors drives your display. Ideally this would be done by directly flipping a hardware switch, called a mux ([[multiplexer|http://en.wikipedia.org/wiki/Multiplexer]]). However, such a mux does not always exist! 

If a hardware mux does not exist, there is no physical way to perform this GPU switching. Optimus therefore effectively "implements" a software mux: it ensures that the relevant data is sent to and processed on the right GPU, and then the data needed for display is copied to the device that actually drives the screen. 

When it comes to how a specific machine is configured, there are a number of possibilities. If a hardware mux exists, it is used to select which GPU drives the internal panel, the external monitor, or possibly both. It is also possible that one GPU is hardwired to the internal panel, so the other GPU cannot drive it at all; the same goes for the external monitor output. In the worst case, the Intel GPU is hardwired to the internal panel and the Nvidia GPU is hardwired to the external output! The best case scenario is a mux that can select which GPU drives which outputs. 

Basically, you can have _any_ combination of these possibilities; there is no standard for how things are wired. There should be ways to detect the wiring and whether (and where) a mux is present, but the documentation is not available to the developers. Maybe you can help us figure out how to do this, or 'petition' Nvidia to release these specs: [[nvidia customer help|http://nvidia.custhelp.com/]]. 


## Switcheroo - Using one card at a time

If your laptop has a hardware mux, the kernel vga_switcheroo driver may be able to select the wanted GPU at boot. There are also hacks based on the switcheroo, like asus-switcheroo, but they offer no extra value; if one of the hacks happens to work where the switcheroo does not, the switcheroo has a bug. There might already be pending patches on their way towards the mainline kernel. 
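
As a quick check (a sketch, not an official tool), you can look for the switcheroo control file. It only exists when the driver found a mux-capable setup, and debugfs must be mounted (it usually is, at /sys/kernel/debug); run as root:

```shell
# Check whether the vga_switcheroo interface is available.
if [ -e /sys/kernel/debug/vgaswitcheroo/switch ]; then
    status="available"
else
    status="not available (no mux detected, or debugfs not mounted)"
fi
echo "vga_switcheroo: $status"
```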

In all other cases, you are stuck with whatever happens to work by default: no switching, no framebuffer copying. Yet. 

## Using Optimus/Prime

'PRIME GPU offloading' and 'Reverse PRIME' are an attempt to support muxless hybrid graphics in the Linux kernel. They require:

### DRI2

#### Setup

* An updated graphic stack (Kernel, xserver and mesa).
* KMS drivers for both GPUs loaded.
* DDX drivers for both GPUs loaded.


If everything went well, *xrandr --listproviders* should list two providers. In my case, this gives:

    $ xrandr --listproviders 
    Providers: number : 2
    Provider 0: id: 0x8a cap: 0xb, Source Output, Sink Output, Sink Offload crtcs: 2 outputs: 2 associated providers: 1 name:Intel
    Provider 1: id: 0x66 cap: 0x7, Source Output, Sink Output, Source Offload crtcs: 2 outputs: 5 associated providers: 1 name:nouveau
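
The provider list can also be checked from a script. The sketch below parses a captured copy of the listing above rather than calling xrandr directly, so the counting logic is easy to follow; on a live system you would pipe the output of `xrandr --listproviders` in instead:

```shell
# Count "Provider N:" lines; two providers means both GPU drivers
# registered themselves with the X server.
sample='Providers: number : 2
Provider 0: id: 0x8a cap: 0xb, Source Output, Sink Output, Sink Offload crtcs: 2 outputs: 2 associated providers: 1 name:Intel
Provider 1: id: 0x66 cap: 0x7, Source Output, Sink Output, Source Offload crtcs: 2 outputs: 5 associated providers: 1 name:nouveau'

count=$(printf '%s\n' "$sample" | grep -c '^Provider [0-9]')
echo "providers found: $count"
if [ "$count" -lt 2 ]; then
    echo "PRIME needs both GPUs to show up as providers" >&2
fi
```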

#### Offloading 3D

It is then important to tell PRIME which card should be used for offloading. In my case, I want the nouveau card to render and the Intel card to display the result:

    $ xrandr --setprovideroffloadsink nouveau Intel

Once this is done, it becomes very easy to select which card should be used. If you want to offload an application to the secondary GPU, set DRI_PRIME=1; when the application is launched, it will use the second card for its rendering. If you want to use the "regular" GPU, set DRI_PRIME to 0 or omit it. The behaviour can be seen in the following example:

    $ DRI_PRIME=0 glxinfo | grep "OpenGL vendor string"
    OpenGL vendor string: Intel Open Source Technology Center
    $ DRI_PRIME=1 glxinfo | grep "OpenGL vendor string"
    OpenGL vendor string: nouveau

#### Using outputs on discrete GPU

If the secondary GPU has outputs that aren't accessible from the primary GPU, you can use "Reverse PRIME" to make use of them. The primary GPU renders the images and then passes them off to the secondary GPU for display. In the scenario above, you would do

    $ xrandr --setprovideroutputsource nouveau Intel

When this is done, the nvidia card's outputs should be available in xrandr, and you could do something like

    $ xrandr --output HDMI-1 --auto --above LVDS1

in order to add a second screen that is hosted by the nvidia card.

### DRI3

#### Setup

The implementation of DRI3 aims for a more convenient way to use a PRIME setup. It requires some additional setup steps:

* Kernel 3.17 or newer with render nodes (3.16 only works when booting with *drm.rnodes=1*).
* XServer 1.16 with DRI3 support.
* Mesa 10.3 with DRI3 support.
* KMS drivers for both GPUs loaded.
* DDX drivers for the primary GPU loaded.


*Attention: render nodes require the user to be in the "video" group.*
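
A small sketch for checking the group requirement (the "video" group name is the common convention; some newer distributions use a "render" group instead):

```shell
# has_group LIST NAME: succeed if NAME appears in the space-separated
# group LIST (the format printed by `id -nG`).
has_group() {
    printf '%s\n' $1 | grep -qx "$2"
}

if has_group "$(id -nG)" video; then
    echo "OK: user may open the render nodes"
else
    echo "add your user to the video group, then log in again" >&2
fi
```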

If everything went well, offloading to the secondary GPU is done with DRI_PRIME=1:

    $ DRI_PRIME=0 glxinfo | grep "OpenGL vendor string"
    OpenGL vendor string: Intel Open Source Technology Center
    $ DRI_PRIME=1 glxinfo | grep "OpenGL vendor string"
    OpenGL vendor string: nouveau


### Power management

When an application is using 'PRIME GPU offloading', both the discrete and the integrated GPUs are active and aside from optimizations at the driver level, nothing else can be done. However, when no application is making use of the discrete GPU, the default behaviour should be for the card to automatically power down entirely after 5 seconds. Note that using an output on the discrete GPU will force it to stay on.

This dynamic power management feature was added in Linux 3.12 but requires Linux 3.13 to work properly with Nouveau. If you cannot make use of this feature and do not mind not using your NVIDIA GPU, it is recommended to blacklist the 'nouveau' module and use bbswitch to turn off the NVIDIA GPU. See your distribution's wiki for more information.
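
For example, blacklisting is usually done with a modprobe configuration file. The file name below is just a common convention, and the exact mechanism may differ per distribution:

```
# /etc/modprobe.d/blacklist-nouveau.conf
blacklist nouveau
```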

#### Checking the current power state

You can query the current power state and policy by running as root:

    # cat /sys/kernel/debug/vgaswitcheroo/switch
    0:DIS: :DynOff:0000:01:00.0
    1:IGD:+:Pwr:0000:00:02.0
    2:DIS-Audio: :Off:0000:01:00.1

Each line of the output is of the following format:

 * A number: not important
 * A string:
  * DIS: Discrete GPU (your AMD or NVIDIA GPU)
  * IGD: Integrated Graphics (your Intel GPU)
  * DIS-Audio: The audio device exported by your discrete GPU for HDMI sound playback
 * A sign:
  * '+': This device is connected to the graphics connectors
  * ' ': This device is not connected to the graphics connectors
 * A power state:
  * OFF: The device is powered off
  * ON: The device is powered on
  * DynOff: The device is currently powered off but will be powered on when needed
  * DynPwr: The device is currently powered on but will be powered off when not needed
 * The PCI ID of the device
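
As an illustration of this format, the sketch below splits a captured copy of the output above into its fields; on a real machine you would read /sys/kernel/debug/vgaswitcheroo/switch as root instead:

```shell
# Parse the colon-separated vgaswitcheroo lines. The PCI ID itself
# contains colons, so it is collected as the trailing field by read.
sample='0:DIS: :DynOff:0000:01:00.0
1:IGD:+:Pwr:0000:00:02.0
2:DIS-Audio: :Off:0000:01:00.1'

out=$(printf '%s\n' "$sample" | while IFS=: read -r _ dev conn power pci; do
    case "$conn" in
        +) state="drives the connectors" ;;
        *) state="not connected" ;;
    esac
    echo "$dev ($pci): $power, $state"
done)
echo "$out"
```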

#### Forcing the power state of the devices

Turn on the GPU that is not currently driving the outputs:

    echo ON > /sys/kernel/debug/vgaswitcheroo/switch

Turn off the GPU that is not currently driving the outputs:

    echo OFF > /sys/kernel/debug/vgaswitcheroo/switch

Connect the graphics connectors to the integrated GPU:

    echo IGD > /sys/kernel/debug/vgaswitcheroo/switch

Connect the graphics connectors to the discrete GPU:

    echo DIS > /sys/kernel/debug/vgaswitcheroo/switch

Prepare a switch to the integrated GPU to occur when the X server gets restarted:

    echo DIGD > /sys/kernel/debug/vgaswitcheroo/switch

Prepare a switch to the discrete GPU to occur when the X server gets restarted:

    echo DDIS > /sys/kernel/debug/vgaswitcheroo/switch

### Known issues

#### Everything seems to work but the output is black
*This is only a problem with DRI2*

Try using a re-parenting compositor; such compositors usually provide 3D desktop effects.

*WARNING*: Currently, KWin only works when desktop effects are enabled. If a window appears pure black, try minimizing/maximizing or resizing it. This bug is being investigated.

#### Poor performance when using the Nouveau card

Right now, Nouveau does not support reclocking and other power management features. This badly cripples the performance of the GPU and increases power consumption compared to the proprietary driver.

Using PRIME with Nouveau may not result in any performance gain right now, but it should in the not-so-distant future.