The algorithm we use for resolving parallel copy instructions plays a little shell game with the values. This lets us handle cases where, for instance, we have a -> b and b -> a and need to go through a temporary to do the swap. One consequence of this algorithm is that it tends to emit long mov chains, which are typically really bad for GPUs, where a mov is far from free. For instance, it's likely to turn this:
r16 = ssa_0; r17 = ssa_0; r18 = ssa_0; r15 = ssa_0
into this:
r15 = mov ssa_0
r18 = mov r15
r17 = mov r18
r16 = mov r17
which, if it's the only thing in a block (as is common for phis), is impossible for a scheduler to fix because of the dependencies, and you end up with significant stalling. If, on the other hand, we only chain in the cases where we actually need to free up a register so that it can be used as a destination, we can emit this:
r15 = mov ssa_0 r18 = mov ssa_0 r17 = mov ssa_0 r16 = mov ssa_0
which is far nicer to the scheduler. On Intel, our copy propagation pass will undo the chain for us, so this has no shader-db impact. However, for less intelligent back-ends, it's probably a lot better.
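The improved strategy can be sketched as follows. This is a hypothetical Python model, not the actual NIR implementation: `resolve_parallel_copy`, its `(dest, src)` pair representation, and the `temp` register name are all illustrative. The idea is that each value is read from its original register for as long as that register is live, and reads are only redirected (through the temporary) when a register genuinely has to be clobbered to break a cycle such as the swap above.

```python
def resolve_parallel_copy(copies, temp="tmp"):
    """Sequentialize a parallel copy (hypothetical sketch).

    copies: list of (dest, src) pairs; all dests are distinct.
    Returns a list of (dest, src) movs.  Each value is read from its
    original register while that register is still live; we only
    redirect reads through `temp` when a register must be clobbered
    to break a cycle.
    """
    src_of = {d: s for d, s in copies if d != s}   # drop self-copies
    loc = {s: s for s in src_of.values()}          # current home of each value
    uses = {}                                      # pending reads of each register
    for s in src_of.values():
        uses[s] = uses.get(s, 0) + 1

    emitted, pending = [], list(src_of)
    while pending:
        # Prefer a destination register that no pending copy still reads,
        # so writing it clobbers nothing.
        d = next((x for x in pending if uses.get(x, 0) <= 0), None)
        if d is None:
            # Only cycles remain: free up a register via the temporary.
            d = pending[0]
            emitted.append((temp, d))  # save d's current value
            loc[d] = temp              # future reads of that value use temp
            uses[d] = 0                # register d itself is now free
        s = src_of[d]
        emitted.append((d, loc[s]))    # read s from wherever it lives now
        uses[s] -= 1
        pending.remove(d)
    return emitted
```

With this, the fan-out of ssa_0 above produces four independent movs straight from ssa_0 (no chain), while a -> b plus b -> a still resolves through the temporary.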