Clean up queues, free objects. On the next entervt, unmark the hardware to
let the user try again (presumably after resetting the chip). Someday we'll
automatically recover...
find_or_create_page doesn't quite set up pages correctly; any newly created
pages aren't hooked into the shmem object quite right, so user space mmaps
of those pages end up mapping pages full of zeros, which then get written
to the real pages inappropriately. This patch requires that the kernel
export shmem_getpage.
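A minimal sketch of the resulting page-lookup loop, assuming the
circa-2008 signature of the newly exported function and the SGP_DIRTY
mode; the page_list field is likewise an assumption for this tree:

    /* Populate the object's pages via shmem_getpage() instead of
     * find_or_create_page(), so each page is properly hooked into
     * the shmem object. */
    struct inode *inode = obj->filp->f_path.dentry->d_inode;
    struct page *page;
    int i, ret;

    for (i = 0; i < page_count; i++) {
            ret = shmem_getpage(inode, i, &page, SGP_DIRTY, NULL);
            if (ret)
                    return ret;
            obj_priv->page_list[i] = page;
    }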
When a software fallback has completed, usermode must notify the kernel so
that any scanout buffers can be synchronized. This ioctl should be called
whenever a fallback completes to flush CPU and chipset caches.
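From user space that might look like the following; the ioctl name and
argument struct are assumptions patterned on where the i915 interface
ended up:

    /* Notify the kernel that a software fallback has finished
     * touching this object, so caches get flushed. */
    struct drm_i915_gem_sw_finish finish;

    memset(&finish, 0, sizeof(finish));
    finish.handle = bo_handle;      /* GEM handle of the scanout buffer */
    ret = ioctl(fd, DRM_IOCTL_I915_GEM_SW_FINISH, &finish);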
Lots of conflicts; it seems to load OK, but I'm sure some bugs snuck in.
Conflicts:
linux-core/drmP.h
linux-core/drm_lock.c
linux-core/i915_gem.c
shared-core/drm.h
shared-core/i915_dma.c
shared-core/i915_drv.h
shared-core/i915_irq.c
Record the last execbuffer sequence for each client.
Record that sequence in the throttle ioctl as the 'throttle sequence'.
Wait for the last throttle sequence in the throttle ioctl.
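In sketch form, with the per-client field names and the wait helper
assumed:

    /* Per-client state: the last execbuffer sequence this client
     * emitted, and the sequence recorded at its previous throttle. */
    struct drm_i915_file_private {
            uint32_t last_gem_seqno;
            uint32_t last_gem_throttle_seqno;
    };

    static int i915_gem_throttle(struct drm_device *dev,
                                 struct drm_file *file_priv)
    {
            struct drm_i915_file_private *fp = file_priv->driver_priv;
            uint32_t seqno = fp->last_gem_throttle_seqno;
            int ret = 0;

            /* Remember the newest sequence for the next call, then
             * wait for the one recorded last time around. */
            fp->last_gem_throttle_seqno = fp->last_gem_seqno;
            if (seqno)
                    ret = i915_wait_request(dev, seqno);
            return ret;
    }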
We want request retirement to occur about once a second when the request
queue is non-empty. This was done with a timer that queued a work_struct;
using a delayed_work instead makes a lot more sense.
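A sketch of the delayed_work pattern, using the standard workqueue API;
the handler and field names are assumptions:

    /* Self-rearming retire handler: runs about once a second for as
     * long as the request list stays non-empty. */
    static void i915_gem_retire_work_handler(struct work_struct *work)
    {
            drm_i915_private_t *dev_priv =
                    container_of(work, drm_i915_private_t,
                                 mm.retire_work.work);

            i915_gem_retire_requests(dev_priv->dev);
            if (!list_empty(&dev_priv->mm.request_list))
                    schedule_delayed_work(&dev_priv->mm.retire_work, HZ);
    }

    /* At setup: */
    INIT_DELAYED_WORK(&dev_priv->mm.retire_work,
                      i915_gem_retire_work_handler);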
The delayed_work was insufficient once we started masking interrupts so
that they are enabled only when someone is waiting for them (and would
thus retire requests themselves). It was replaced by the retire_timer.
Use GEM for ring buffer setup and framebuffer allocation. This means reworking
the hardware status page stuff a bit (just use the basic range allocator for
vram for now) and #ifdef'ing out the TTM & DRI2 code. Works well enough to
load/unload several times and display fbcon on my T61 (though there's still
some unexplained console corruption).
These are the create (may want location flags), pread/pwrite/mmap
(performance tuning hints), and set_domain (will 32 bits be enough for
everyone?) ioctls. Left in the generic set are just flink/open/close.
The 2D driver must be updated for this change, and API but not ABI is broken
for 3D. The driver version is bumped to mark this.
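For reference, the driver-specific create ioctl ends up looking something
like this (names as the i915 interface later settled; an assumption for
this tree):

    /* Create a GEM object of the given size, returning a handle. */
    struct drm_i915_gem_create {
            uint64_t size;          /* in: object size, page-aligned */
            uint32_t handle;        /* out: handle for the new object */
            uint32_t pad;
    };

    #define DRM_IOCTL_I915_GEM_CREATE \
            DRM_IOWR(DRM_COMMAND_BASE + DRM_I915_GEM_CREATE, \
                     struct drm_i915_gem_create)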
Use new GEM based ring buffer initialization. Still need to init GEM & use it
for framebuffer allocation etc.
Conflicts:
shared-core/i915_dma.c
shared-core/i915_drv.h
This requires that the X Server use the execbuf interface for buffer
submission, as it no longer has direct access to the ring. This is
therefore a flag day for the gem interface.
This also adds enter/leavevt ioctls for use by the X Server. These would
get stubbed out in a modesetting implementation, but are required in an
environment where the device's state is managed by the DRM only while X
holds the VT.
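The VT ioctls carry no argument block; a sketch of the definitions and
the modesetting stub (names assumed):

    #define DRM_IOCTL_I915_GEM_ENTERVT \
            DRM_IO(DRM_COMMAND_BASE + DRM_I915_GEM_ENTERVT)
    #define DRM_IOCTL_I915_GEM_LEAVEVT \
            DRM_IO(DRM_COMMAND_BASE + DRM_I915_GEM_LEAVEVT)

    /* In a modesetting implementation this becomes a no-op, since
     * the DRM owns the device state all the time. */
    int i915_gem_entervt_ioctl(struct drm_device *dev, void *data,
                               struct drm_file *file_priv)
    {
            return 0;
    }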
Without the user IRQ running constantly, there's no wakeup when the ring
empties to go retire requests and free buffers. Use a 1-second timer to
make that happen more often.
Instead of throttling at execbuffer time, have the application ask to
throttle explicitly. This allows the throttle to happen less often, and
without holding the DRM lock.
The interrupt enable register cannot be used to temporarily disable
interrupts; instead, use the interrupt mask register.
Note that this change means that a pile of buffers will be left stuck on
the chip, as the final interrupts will not be recognized and nothing will
come along to drain things.
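A sketch of the mask-register approach, assuming I915_WRITE(), an IMR
register offset, an I915_USER_INTERRUPT bit, and the refcount/lock
fields shown; the enable path is symmetric:

    /* Temporarily disable the user interrupt by masking it rather
     * than clearing its enable bit; refcounted for nested users. */
    static void i915_user_irq_off(drm_i915_private_t *dev_priv)
    {
            spin_lock(&dev_priv->user_irq_lock);
            if (--dev_priv->user_irq_refcount == 0) {
                    dev_priv->irq_mask_reg |= I915_USER_INTERRUPT;
                    I915_WRITE(IMR, dev_priv->irq_mask_reg);
            }
            spin_unlock(&dev_priv->user_irq_lock);
    }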
Add code to get panel modes from the VBIOS if present and check whether certain
outputs exist. Should make our display detection code a little more robust.
Recording the tail pointer in a local variable improves performance, but
if someone messes up and fails to reload it at the right time, the driver
will write commands to the wrong part of the ring and scramble execution
badly. This change (available by setting I915_RING_VALIDATE to 1) checks
that the cached tail pointer matches the hardware tail pointer at each
ring-buffer addition, calling BUG_ON when that's not true.
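In sketch form, with the tail register and mask names borrowed from
later i915 trees (an assumption here):

    #define I915_RING_VALIDATE 0    /* set to 1 to enable the check */

    #if I915_RING_VALIDATE
    /* Verify the cached tail against the hardware's idea of it
     * before each addition to the ring. */
    static void i915_ring_validate(struct drm_device *dev)
    {
            drm_i915_private_t *dev_priv = dev->dev_private;

            BUG_ON(dev_priv->ring.tail !=
                   (I915_READ(PRB0_TAIL) & TAIL_ADDR));
    }
    #else
    #define i915_ring_validate(dev) do { } while (0)
    #endif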
There are now 3 lists. Active is buffers currently in the ringbuffer.
Flushing is buffers that are no longer in the ringbuffer but need a flush
before unbinding. Inactive is as before. This prevents object_free →
unbind → wait_rendering → object_reference and a kernel oops about weird
refcounting.
This also avoids an extra synchronous flush and wait when freeing a buffer
that had a write_domain set (such as a temporary rendered to and then read
from using the 2D engine). It will sit around on the flushing list until
the appropriate flush gets emitted, or until we need the GTT space for
another operation.
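In outline, with the helper and list names assumed:

    /* On retirement, a buffer leaves the active list; if it still has
     * a pending write domain it moves to the flushing list, otherwise
     * straight to inactive (the eviction pool). */
    static void
    i915_gem_object_move_off_active(struct drm_gem_object *obj)
    {
            struct drm_i915_gem_object *obj_priv = obj->driver_private;
            drm_i915_private_t *dev_priv = obj->dev->dev_private;

            if (obj->write_domain != 0)
                    list_move_tail(&obj_priv->list,
                                   &dev_priv->mm.flushing_list);
            else
                    list_move_tail(&obj_priv->list,
                                   &dev_priv->mm.inactive_list);
    }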
This gives us several properties we want, such as using the full 32-bit
range (except zero), avoiding DRM_WAIT_ON, and a 1:1 mapping of active
sequence numbers to request structs, which will be used soon for
throttling and interrupt-driven list cleanup.
Additionally, a boolean active field is added to indicate which list an
object is on, rather than smashing last_rendering_cookie to 0 to show
inactive. This will help with flush-reduction later on, and makes the code
clearer.
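A sketch of the request bookkeeping this describes (field names
assumed):

    /* One request struct per active sequence number: the 1:1 mapping. */
    struct drm_i915_gem_request {
            uint32_t seqno;                /* full 32-bit range, never 0 */
            unsigned long emitted_jiffies; /* when it was emitted */
            struct list_head list;         /* on the request list */
    };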
It would be nice if one day the DRM driver was the canonical source for
register definitions and core macros. To that end, this patch cleans
things up quite a bit, removing redundant definitions (some with
different names referring to the same register) and generally tidying up
the header file.
pread and pwrite must update the memory domains to ensure consistency with
the GPU. At some point, it should be possible to avoid clflush through this
path, but that isn't working for me.
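In outline, the pread side does something like this; the domain helper,
mapping, and error label are assumptions:

    /* Pull the object into the CPU read domain first, flushing the
     * GPU and clflushing as needed so the copy sees GPU writes. */
    ret = i915_gem_object_set_domain(obj, I915_GEM_DOMAIN_CPU, 0);
    if (ret)
            goto unlock;

    if (copy_to_user(user_data, vaddr + args->offset, args->size) != 0)
            ret = -EFAULT;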
Now, the LRU list has objects that are completely done rendering and ready
to kick out, while the execution list has things with active rendering,
which have associated cookies and reference counts on them.
Domain information is about buffer relationships, not buffer contents.
That means a relocation carries the domain information, since it knows
how the source buffer references the target buffer.
This also adds the set_domain ioctl so that user space can move buffers to
the cpu domain.
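Concretely, the relocation entry carries the domains, and the new ioctl
takes the same flags (struct layout as the i915 interface later settled;
an assumption for this tree):

    struct drm_i915_gem_relocation_entry {
            uint32_t target_handle;    /* handle of the target buffer */
            uint32_t delta;            /* added to the target's offset */
            uint64_t offset;           /* where in the source to write */
            uint64_t presumed_offset;  /* offset user space assumed */
            uint32_t read_domains;     /* how the source reads the target */
            uint32_t write_domain;     /* how the source writes the target */
    };

    struct drm_i915_gem_set_domain {
            uint32_t handle;
            uint32_t read_domains;     /* e.g. I915_GEM_DOMAIN_CPU */
            uint32_t write_domain;
    };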
If objects on the LRU aren't ref counted, they'll get pulled from the GTT
as soon as they are freed. This change does cause objects to get stuck in
the GTT until they're forced out by new requests. The LRU should get
cleaned when the IRQ occurs.
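In sketch form, using the generic GEM reference helpers:

    /* Take a reference when a buffer goes onto the active list: */
    drm_gem_object_reference(obj);

    /* Drop it when IRQ-driven retirement pulls the buffer back off;
     * only then can a freed object actually leave the GTT. */
    drm_gem_object_unreference(obj);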