This interface was defined completely wrong; however, userspace has only ever
used four values from it (0x1, 0x2, 0x3 and 0x6), so fix the interface to do
what userspace actually expected, and add new defines for new users to use it
properly.
Previously, the R300_CMD_WAIT command would write the value it was passed
directly to the hardware. However, this is incorrect, because the R300_WAIT_*
values used are internal interface values that do not map directly onto the
hardware. The new function I have added translates the R300_WAIT_* values into
the appropriate hardware values before writing the register.
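As a rough sketch of what that translation step looks like (the flag and
hardware bit values below are assumptions chosen to keep the example
self-contained, not the driver's actual defines):

    /* Sketch only, not the driver's actual code: translate the R300_WAIT_*
     * interface flags into bits for the hardware's wait register instead of
     * writing the interface value to the register verbatim. */
    #include <stdint.h>

    #define R300_WAIT_2D        0x1        /* interface flag (assumed value) */
    #define R300_WAIT_3D        0x2        /* interface flag (assumed value) */

    #define HW_WAIT_2D_IDLE     (1u << 14) /* hardware bit (assumed) */
    #define HW_WAIT_3D_IDLE     (1u << 16) /* hardware bit (assumed) */

    static uint32_t r300_wait_to_hw(uint32_t flags)
    {
        uint32_t wait_until = 0;

        if (flags & R300_WAIT_2D)
            wait_until |= HW_WAIT_2D_IDLE;
        if (flags & R300_WAIT_3D)
            wait_until |= HW_WAIT_3D_IDLE;

        return wait_until;
    }
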
Thanks to John Bridgman for pointing this out. :-)
More or less a workaround for issues on some chipsets where a context
switch results in critical data in PRAMIN being overwritten by the GPU.
The correct fix is known, but it may be some time before it's a feasible
option.
This is the correct fix for the RS690, and hopefully for the DMA coherent
work as well. For now we limit everybody to a 32-bit DMA mask, but it is
possible for RS690 to use a 40-bit DMA mask for the GART table itself, and
for the PCIE cards to use 40 bits for the table entries.
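For illustration, clamping the device to a 32-bit DMA mask looks roughly like
the sketch below; radeon_set_dma_masks is a hypothetical wrapper name, and
only the pci_set_dma_mask()/pci_set_consistent_dma_mask() calls are the real
PCI DMA API:

    /* Minimal sketch: limit the device to a 32-bit DMA mask for now.  A later
     * change could raise this to 40 bits for the RS690 GART table and for the
     * PCIE GART table entries. */
    #include <linux/pci.h>
    #include <linux/dma-mapping.h>

    static int radeon_set_dma_masks(struct pci_dev *pdev)
    {
        int ret;

        ret = pci_set_dma_mask(pdev, DMA_BIT_MASK(32));
        if (ret)
            return ret;

        return pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32));
    }
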
Signed-off-by: Dave Airlie <airlied@redhat.com>
DMA command submission: it's worth remembering that all new bright ideas on
how to make this command reader work properly and according to the docs will
probably fail :( Bring in some old code.
If we ever want to be able to use the 3D engine, we have no choice. It
appears that the tiling setup (required for 3D on G8x) is in the page tables.
The immediate benefit of this change, however, is that it is now not possible
for a client to use the GPU to render over the top of important engine setup
tables, which also live in VRAM.
G8x VRAM size is limited to 512MiB at the moment, as we use a 1:1 mapping of
real VRAM pages to their offset from the start of a channel's VRAM DMA object
and only populate a single PDE for VRAM use.
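As a back-of-the-envelope sketch of the 1:1 index arithmetic, assuming 4KiB
pages and a 512MiB span per page directory entry (both figures are assumptions
of this example, not statements about the hardware):

    /* Self-contained sketch of the 1:1 mapping arithmetic only. */
    #include <stdint.h>

    #define VM_PAGE_SHIFT   12               /* assumed 4KiB pages */
    #define VM_PDE_SPAN     (512ULL << 20)   /* assumed 512MiB per PDE */

    /* With a 1:1 mapping, a page's offset from the start of the channel's
     * VRAM DMA object directly selects its PDE and PTE indices.  Populating
     * only PDE 0 therefore caps usable VRAM at VM_PDE_SPAN. */
    static void vram_offset_to_indices(uint64_t offset, uint32_t *pde,
                                       uint32_t *pte)
    {
        *pde = (uint32_t)(offset / VM_PDE_SPAN);
        *pte = (uint32_t)((offset % VM_PDE_SPAN) >> VM_PAGE_SHIFT);
    }
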
Conflicts:
linux-core/drm_compat.c
linux-core/drm_compat.h
linux-core/drm_ttm.c
shared-core/i915_dma.c
Bump driver minor to 13 due to introduction of new
relocation type.
My 965GM gets interrupts stuck when using the old PIPE_VBLANK interrupt.
Switch to the PIPE_EVENT interrupt mechanism, and set the PIPE*STAT
registers to use START_VBLANK on 965 and VBLANK on previous chips.
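The per-chip choice boils down to something like the sketch below; the bit
positions are assumptions used only to keep the example self-contained:

    /* Sketch of the per-chip PIPE*STAT enable-bit selection. */
    #include <stdint.h>
    #include <stdbool.h>

    #define PIPE_START_VBLANK_INTERRUPT_ENABLE  (1u << 18)  /* assumed bit */
    #define PIPE_VBLANK_INTERRUPT_ENABLE        (1u << 17)  /* assumed bit */

    /* 965-class chips report the start of vblank; older chips only have the
     * plain vblank status, so pick the enable bit accordingly. */
    static uint32_t pipestat_vblank_enable(bool is_i965)
    {
        return is_i965 ? PIPE_START_VBLANK_INTERRUPT_ENABLE
                       : PIPE_VBLANK_INTERRUPT_ENABLE;
    }
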
Will hopefully work a bit better than previous code, which depended on
knowing the channel's most recent PUT value. Some chips always return
0 on reading these regs, and currently userspace is the only other entity
which knows the value.
Texture uploads could hit the blitter coordinate limit; adjust the texture
offset when uploading the pieces. Make sure to check the end address of the
upload too.
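A sketch of the splitting idea, with an assumed coordinate limit and a
hypothetical emit_blit() standing in for the real blit emission:

    /* Sketch only, not the driver's actual upload path. */
    #include <stdint.h>

    #define BLIT_MAX_HEIGHT  8191u   /* assumed blitter Y-coordinate limit */

    /* Walk the upload in pieces, moving the destination offset forward so
     * each blit starts at Y == 0 and never exceeds the coordinate limit.
     * Returns 0 on success, -1 if the upload would run past dst_end. */
    static int upload_texture(uint64_t dst_offset, uint64_t dst_end,
                              uint32_t pitch, uint32_t height)
    {
        while (height) {
            uint32_t lines = height > BLIT_MAX_HEIGHT ? BLIT_MAX_HEIGHT
                                                      : height;

            /* Check the end address of this piece, not just its start. */
            if (dst_offset + (uint64_t)lines * pitch > dst_end)
                return -1;

            /* emit_blit(dst_offset, pitch, lines);  -- hypothetical call */

            dst_offset += (uint64_t)lines * pitch;
            height -= lines;
        }
        return 0;
    }
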
Make sure we have enough room for all the GR registers or we'll end up
clobbering the AR index register (which should actually be harmless
unless the BIOS is making an assumption about it).
On resume, if the interrupt state isn't restored correctly, we may end up
with a flood of unexpected or ill-timed interrupts, which could cause the
kernel to disable the interrupt, or cause vblank events to happen at the wrong
time. So save and restore them properly.
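A minimal sketch of the save/restore idea; the register offsets and mmio
helpers below are assumptions, not the driver's actual accessors:

    /* Sketch: snapshot the interrupt-related registers before suspend and
     * write them back on resume. */
    #include <stdint.h>

    #define IER        0x020a0   /* assumed register offsets */
    #define IMR        0x020a8
    #define PIPEASTAT  0x70024
    #define PIPEBSTAT  0x71024

    extern uint32_t mmio_read(uint32_t reg);                 /* hypothetical */
    extern void     mmio_write(uint32_t reg, uint32_t val);  /* hypothetical */

    struct irq_state { uint32_t ier, imr, pipea_stat, pipeb_stat; };

    static void irq_state_save(struct irq_state *s)
    {
        s->ier = mmio_read(IER);
        s->imr = mmio_read(IMR);
        s->pipea_stat = mmio_read(PIPEASTAT);
        s->pipeb_stat = mmio_read(PIPEBSTAT);
    }

    static void irq_state_restore(const struct irq_state *s)
    {
        mmio_write(IER, s->ier);
        mmio_write(IMR, s->imr);
        mmio_write(PIPEASTAT, s->pipea_stat);
        mmio_write(PIPEBSTAT, s->pipeb_stat);
    }
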
We need to return an accurate vblank count to the callers of
->get_vblank_counter, and in the Intel case the actual frame count
register isn't updated until the next active line is displayed, so we
need to return one more than the frame count register if we're currently
in a vblank period.
However, none of the various ways of doing this is working yet, so
disable the logic for now. This may result in a few missed events, but
should fix the hangs some people have seen due to the current code
tripping the wraparound logic in drm_update_vblank_count.
The frame count registers don't increment until the start of the next
frame, so make sure we return an incremented count if called during the
actual vblank period.
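Roughly, the intended fixup looks like the sketch below, where
read_frame_count() and pipe_in_vblank() are hypothetical helpers standing in
for the real register reads:

    /* Sketch of the counter fixup only. */
    #include <stdint.h>
    #include <stdbool.h>

    extern uint32_t read_frame_count(int pipe);   /* hypothetical helper */
    extern bool     pipe_in_vblank(int pipe);     /* hypothetical helper */

    /* The hardware frame counter only increments at the start of the next
     * frame, so report one more than it holds while we are still inside the
     * vblank period. */
    static uint32_t get_vblank_counter(int pipe)
    {
        uint32_t count = read_frame_count(pipe);

        if (pipe_in_vblank(pipe))
            count++;

        return count;
    }
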
Ack the IRQs correctly (PIPExSTAT first followed by IIR). Don't read
vblank counter registers on disabled pipes (might hang otherwise). And
deal with flipped pipe/plane mappings if present.
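The ack ordering amounts to something like the following sketch (register
offsets and mmio helpers are assumptions for illustration):

    /* Sketch of the ack ordering only. */
    #include <stdint.h>

    #define IIR        0x020a4   /* assumed register offsets */
    #define PIPEASTAT  0x70024
    #define PIPEBSTAT  0x71024

    extern uint32_t mmio_read(uint32_t reg);                 /* hypothetical */
    extern void     mmio_write(uint32_t reg, uint32_t val);  /* hypothetical */

    /* Ack the per-pipe status bits first, then the top-level IIR bits, so a
     * still-pending PIPExSTAT event cannot immediately re-assert IIR. */
    static void ack_pipe_events(void)
    {
        uint32_t iir   = mmio_read(IIR);
        uint32_t pipea = mmio_read(PIPEASTAT);
        uint32_t pipeb = mmio_read(PIPEBSTAT);

        mmio_write(PIPEASTAT, pipea);  /* writing back the set bits acks them */
        mmio_write(PIPEBSTAT, pipeb);
        mmio_write(IIR, iir);          /* ack IIR last */
    }
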
This is necessary for AGP to work after running BIOS init scripts on nv3x,
and is seen in mmio traces of all cards (nv04-nv4x).
I'm not making the equivalent change to nv40_mc.c, as early cards (6200,
6800GT) use the 0x000018XX PBUS, later cards use the 0x000880XX PBUS, and I
don't know the effects of using the wrong one.