Without the user IRQ running constantly, there's no wakeup when the ring
empties to go retire requests and free buffers. Use a 1 second timer to make
that happen more often.
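
A minimal sketch of the fallback timer, assuming the current
timer_setup()/mod_timer() kernel API (the retire helper here is a stand-in,
not the real function):

    #include <linux/timer.h>
    #include <linux/jiffies.h>

    static struct timer_list retire_timer;

    static void retire_requests_sketch(void)
    {
            /* stand-in: walk the request list, retire finished ones, free buffers */
    }

    static void retire_timer_fn(struct timer_list *t)
    {
            retire_requests_sketch();
            mod_timer(&retire_timer, jiffies + HZ);  /* re-arm: roughly once a second */
    }

    static void retire_timer_start(void)
    {
            timer_setup(&retire_timer, retire_timer_fn, 0);
            mod_timer(&retire_timer, jiffies + HZ);
    }
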
Instead of throttling at execbuffer time, have the application ask to
throttle explicitly. This allows the throttle to happen less often, and
without holding the DRM lock.
Without kernel modesetting, this requires the cooperation of the userspace
modesetting driver; otherwise we may have to leave the vblank interrupt
enabled to avoid problems.
Only compensate when the driver counter actually appears to have moved
backwards.
The compensation deltas need to be incremental instead of absolute; drop the
vblank_offset field and just use atomic_sub().
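
A rough illustration of the incremental compensation (field and helper names
are invented for the sketch): when a fresh hardware reading appears lower than
the previous one, subtract just that delta from the accumulated count instead
of maintaining an absolute offset.

    #include <linux/atomic.h>
    #include <linux/types.h>

    static atomic_t vblank_count_sketch;        /* accumulated, user-visible count */

    static void note_counter_reading_sketch(u32 last, u32 cur)
    {
            if (cur < last)                     /* only when it actually went backwards */
                    atomic_sub(last - cur, &vblank_count_sketch);
    }
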
Turns out the radeon driver is affected by the same problem that prompted i915
to revert to less useful counter flipping at the end of the vblank interval. In
the long term, we can hopefully implement more reliable methods to achieve
counter flipping at the beginning of vblank, but otherwise this should be an
acceptable workaround.
This reverts commit 6671ad1917.
The vblank ioctl needs to update the userspace parameters when interrupted by
a signal, which was prevented by this. Let's see if this breaks other ioctls...
set_domain can block waiting for rendering to complete. If that process is
interrupted by a signal, it can return -EINTR. Catch this error in all
callers and correctly deal with the result.
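
A sketch of the caller-side handling (the helper and domain constant are
placeholders): the blocking wait inside set_domain may be cut short by a
signal, so callers pass the -EINTR back out instead of ignoring it.

    #define GEM_DOMAIN_CPU_SKETCH 0x1       /* placeholder domain bit */

    /* placeholder for the blocking domain-transfer helper */
    static int gem_object_set_domain_sketch(struct drm_gem_object *obj,
                                            u32 read_domains, u32 write_domain);

    static int use_object_sketch(struct drm_gem_object *obj)
    {
            int ret;

            /* May sleep waiting for rendering; a signal makes it return -EINTR. */
            ret = gem_object_set_domain_sketch(obj, GEM_DOMAIN_CPU_SKETCH, 0);
            if (ret)
                    return ret;     /* propagate -EINTR so the ioctl can be restarted */

            /* ... safe to touch the object's contents here ... */
            return 0;
    }
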
Object domain transfer can involve adding flush ops to the request queue,
and so the DRM lock must be held to avoid having the X server smash pointers
badly.
The interrupt enable register cannot be used to temporarily disable
interrupts; instead, use the interrupt mask register.
Note that this change means a pile of buffers will be left stuck on the
chip, since the final interrupts will no longer be seen and nothing will come
along to drain things.
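
An illustrative sketch of the distinction (the register offset and helpers
are made up, not the real i915 definitions): the enable register is left
alone, and a source is silenced temporarily by setting its bit in the mask
register.

    #include <linux/io.h>
    #include <linux/types.h>

    #define SKETCH_INT_MASK_REG 0x04        /* made-up offset of the mask register */

    static void mask_irq_source_sketch(void __iomem *regs, u32 bit)
    {
            u32 imr = readl(regs + SKETCH_INT_MASK_REG);

            writel(imr | bit, regs + SKETCH_INT_MASK_REG);  /* masked; enable state untouched */
    }

    static void unmask_irq_source_sketch(void __iomem *regs, u32 bit)
    {
            u32 imr = readl(regs + SKETCH_INT_MASK_REG);

            writel(imr & ~bit, regs + SKETCH_INT_MASK_REG);
    }
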
When reading from multiple domains, allow each cache to continue
to hold data until writes occur somewhere. This is done by
first leaving the read_domains alone at bind time (presumably the CPU read
cache contains valid data still) and then in set_domain, if no write_domain
is specified, the new read domains are simply merged into the existing read
domains.
A huge comment was added above set_domain to explain how things are
expected to work.
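
A small sketch of that rule (the object type and helper are invented here):
with no write domain requested, the new read domains are OR'd into the
existing set, while a write starts the domain state over.

    #include <linux/types.h>

    struct gem_obj_domains_sketch {         /* stand-in for the real object */
            u32 read_domains;
            u32 write_domain;
    };

    static void set_domain_sketch(struct gem_obj_domains_sketch *obj,
                                  u32 read_domains, u32 write_domain)
    {
            if (write_domain == 0) {
                    /* Read-only access: every cache may keep its data, just merge. */
                    obj->read_domains |= read_domains;
            } else {
                    /* A write makes other cached copies stale; the flush/invalidate
                     * of the old domains is omitted from this sketch. */
                    obj->read_domains = read_domains;
                    obj->write_domain = write_domain;
            }
    }
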
Newly allocated objects need to be in the CPU domain as they've just been
cleared by the CPU. Also, unmapping objects from the GTT needs to put them
into the CPU domain, both to flush rendering as well as to ensure that any
paging action gets flushed before we remap to the GTT.
Commands in the ring are parsed and started when the head pointer passes by
them, but they are not necessarily finished until a MI_FLUSH happens. This
patch inserts a flush after the execbuffer (the only place a flush wasn't
already happening).
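
A sketch of the emission, written against the legacy ring macros (names
approximate the old i915 helpers rather than quoting them): right after the
batch is dispatched, an MI_FLUSH goes into the ring so the commands are
finished, not merely parsed, by the time anything waits on them.

    static void emit_post_exec_flush_sketch(struct drm_device *dev)
    {
            drm_i915_private_t *dev_priv = dev->dev_private;
            RING_LOCALS;

            BEGIN_LP_RING(2);
            OUT_RING(MI_FLUSH);     /* drain rendering started by the batch */
            OUT_RING(MI_NOOP);      /* pad to keep the emission qword-aligned */
            ADVANCE_LP_RING();
    }
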
There are now 3 lists. Active is buffers currently in the ringbuffer.
Flushing is not in the ringbuffer, but needs a flush before unbinding.
Inactive is as before. This prevents object_free → unbind →
wait_rendering → object_reference and a kernel oops about weird refcounting.
This also avoids a synchronous extra flush and wait when freeing a buffer
which had a write_domain set (such as a temporary buffer rendered to and then
read from using the 2D engine). It will sit around on the flushing list until the
appropriate flush gets emitted, or we need the GTT space for another
operation.
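
A simplified model of the three lists (types and names are invented for the
sketch): retiring a completed request walks the active list in submission
order and moves each finished buffer to flushing if it still owes a flush, or
to inactive if it is ready to unbind.

    #include <linux/list.h>
    #include <linux/types.h>

    struct gem_list_obj_sketch {
            struct list_head list;          /* on exactly one of the three lists */
            u32 write_domain;               /* non-zero: a flush is still owed */
            u32 last_seqno;                 /* request that last used the buffer */
    };

    static LIST_HEAD(active_list);          /* referenced by commands in the ring */
    static LIST_HEAD(flushing_list);        /* out of the ring, flush still needed */
    static LIST_HEAD(inactive_list);        /* idle, safe to unbind or evict */

    static void retire_up_to_sketch(u32 completed_seqno)
    {
            struct gem_list_obj_sketch *obj, *next;

            list_for_each_entry_safe(obj, next, &active_list, list) {
                    if (obj->last_seqno > completed_seqno)
                            break;          /* list is kept in submission order */
                    if (obj->write_domain)
                            list_move_tail(&obj->list, &flushing_list);
                    else
                            list_move_tail(&obj->list, &inactive_list);
            }
    }
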
This lets us get some qualities we desire, such as using the full 32-bit
range (except zero), avoiding DRM_WAIT_ON, and a 1:1 mapping of active
sequence numbers to request structs, which will be used soon for throttling
and interrupt-driven list cleanup.
Additionally, a boolean active field is added to indicate which list an
object is on, rather than smashing last_rendering_cookie to 0 to show
inactive. This will help with flush-reduction later on, and makes the code
clearer.
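
A sketch of the sequence-number allocation and the active flag (names
invented): the counter uses the full 32-bit range but skips zero, and a
separate boolean records list membership instead of overloading a zero cookie.

    #include <linux/types.h>

    struct obj_activity_sketch {
            u32  last_seqno;        /* last request that used this object */
            bool active;            /* which list it's on, tracked explicitly */
    };

    static u32 next_seqno_sketch(u32 *counter)
    {
            u32 seqno = ++(*counter);

            if (seqno == 0)         /* zero stays reserved, never handed out */
                    seqno = ++(*counter);
            return seqno;
    }
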
No need to fill the ring that much; wait for it to become nearly empty
before adding the execbuffer request. A better fix will involve scheduling
ring insertion in the irq handler.
pread and pwrite must update the memory domains to ensure consistency with
the GPU. At some point, it should be possible to avoid clflush through this
path, but that isn't working for me.
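
A sketch of the pread side under those rules (the domain helper and the
kernel mapping are placeholders): the object is pulled into the CPU read
domain first, which is where the clflush currently happens, and only then is
data copied out to userspace.

    #include <linux/uaccess.h>
    #include <linux/types.h>

    /* placeholder: make the object coherent for CPU reads (may clflush) */
    static int set_to_cpu_read_domain_sketch(void *obj);

    static int gem_pread_sketch(void *obj, void *kvaddr,
                                char __user *buf, u64 offset, u64 size)
    {
            int ret;

            ret = set_to_cpu_read_domain_sketch(obj);
            if (ret)
                    return ret;     /* may be -EINTR, as with set_domain above */

            if (copy_to_user(buf, (char *)kvaddr + offset, size))
                    return -EFAULT;
            return 0;
    }
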
The exec list contains all objects, in order of use. The lru list contains
only unpinned objects ready to be evicted. This required two changes -- the
first was to not migrate pinned objects from exec to lru, the second was to
search for the first unpinned object in the exec list when doing eviction.
Now, the LRU list has objects that are completely done rendering and ready
to kick out, while the execution list has things with active rendering,
which have associated cookies and reference counts on them.
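
A sketch of the eviction search (the structure is invented): walk the list in
LRU order and take the first object that isn't pinned; if everything is
pinned, the caller has to wait or fail.

    #include <linux/list.h>
    #include <linux/types.h>

    struct evict_obj_sketch {
            struct list_head list;
            bool pinned;
    };

    static struct evict_obj_sketch *find_eviction_victim_sketch(struct list_head *lru)
    {
            struct evict_obj_sketch *obj;

            list_for_each_entry(obj, lru, list) {
                    if (!obj->pinned)
                            return obj;     /* oldest unpinned object */
            }
            return NULL;                    /* all pinned: caller must wait */
    }
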
Domain information is about buffer relationships, not buffer contents. That
means a relocation contains the domain information as it knows how the
source buffer references the target buffer.
This also adds the set_domain ioctl so that user space can move buffers to
the cpu domain.
This should already have been generally safe since we don't change contents
and put in new relocations between execbufs, so if we were writing in a new
relocation then we'd already have waited for rendering to complete when we moved
the target of the relocation. However, doing the right thing will be required
if we do buffer reuse.
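
A simplified stand-in for what a relocation entry carries under this model
(the field set is trimmed relative to the real one): the read and write
domains travel with the relocation, since only the relocation knows how the
source buffer uses the target.

    #include <linux/types.h>

    struct reloc_entry_sketch {
            __u32 target_handle;    /* buffer being pointed at */
            __u32 delta;            /* added to the target's final offset */
            __u64 offset;           /* where in the source buffer to patch */
            __u32 read_domains;     /* how the source reads the target */
            __u32 write_domain;     /* how (if at all) it writes the target */
    };
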
The kernel has removed nopage, so move the old nopage code paths into a compat
VM file and switch to using the fault paths.
nopfn is also on its way out, so we should switch to using fault for that path
soon as well.
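
A minimal fault-path sketch, assuming a recent kernel where .fault receives
only the struct vm_fault (older kernels also passed the vma separately); the
page-lookup helper is hypothetical.

    #include <linux/mm.h>

    /* hypothetical: find the backing page for this mapping offset */
    static struct page *lookup_backing_page_sketch(void *priv, pgoff_t pgoff);

    static vm_fault_t compat_fault_sketch(struct vm_fault *vmf)
    {
            struct vm_area_struct *vma = vmf->vma;
            struct page *page;

            page = lookup_backing_page_sketch(vma->vm_private_data, vmf->pgoff);
            if (!page)
                    return VM_FAULT_SIGBUS;

            get_page(page);
            vmf->page = page;       /* the core MM inserts it into the mapping */
            return 0;
    }

    static const struct vm_operations_struct compat_vm_ops_sketch = {
            .fault = compat_fault_sketch,
    };
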
If objects on the lru aren't ref counted, they'll get pulled from the gtt as
soon as they are freed. This change does cause objects to get stuck in the
gtt until they're forced out by new requests. The lru should get cleaned
when the irq occurs.
Pages come back from find_or_create_page locked, but must not stay locked
for long. Unlock them immediately instead of waiting until we're done with
them to avoid deadlock when applications try to touch them.
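
A sketch of the pattern (error handling trimmed): find_or_create_page()
returns the page locked, so drop the page lock immediately rather than
keeping it while the page stays in use.

    #include <linux/pagemap.h>
    #include <linux/gfp.h>

    static struct page *get_backing_page_sketch(struct address_space *mapping,
                                                pgoff_t index)
    {
            struct page *page;

            page = find_or_create_page(mapping, index, GFP_KERNEL);
            if (!page)
                    return NULL;

            /* Returned locked; unlock right away so anything else that tries
             * to lock this page (e.g. an application touching it) cannot
             * deadlock against us. */
            unlock_page(page);
            return page;
    }
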
Track named objects in /proc/dri/0/gem_names.
Track total object count in /proc/dri/0/gem_objects.
Initialize device gem data.
Return -ENODEV for gem ioctls if the driver doesn't support gem.
Call unlock_page when unbinding from gtt.
Add numerous missing calls to drm_gem_object_unreference.
Names are just another unique integer set (from another idr object).
Names are removed when the user references (handles) are all destroyed --
this required that handles for objects be counted separately from
internal kernel references (so that we can tell when the handles are all
gone).
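
A sketch of the naming scheme (locking and the real object type omitted):
names come out of a second idr starting at 1, and a separate handle count,
counting only userspace handles, tells us when the name can be released.

    #include <linux/idr.h>
    #include <linux/gfp.h>

    static DEFINE_IDR(gem_name_idr_sketch);         /* global names, separate from handles */

    struct named_obj_sketch {
            int name;                               /* 0 means unnamed */
            unsigned int handle_count;              /* userspace handles only */
    };

    static int flink_sketch(struct named_obj_sketch *obj)
    {
            if (!obj->name) {
                    int name = idr_alloc(&gem_name_idr_sketch, obj, 1, 0, GFP_KERNEL);

                    if (name < 0)
                            return name;
                    obj->name = name;               /* names start at 1, never 0 */
            }
            return obj->name;
    }

    static void handle_destroyed_sketch(struct named_obj_sketch *obj)
    {
            if (--obj->handle_count == 0 && obj->name) {
                    idr_remove(&gem_name_idr_sketch, obj->name);
                    obj->name = 0;                  /* last handle gone: drop the name */
            }
    }
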
Mixed 32/64-bit systems need 'special' help for ioctls where the user-space
and kernel-space datatypes differ. Fixing the datatypes to be the same size,
and to align the same way, for both 32- and 64-bit ppc and x86 environments
will eliminate the need to have magic 32/64-bit ioctl translation code.
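
A sketch of the kind of layout that needs no translation (the struct itself
is hypothetical): fixed-size types only, pointers carried as __u64, and
fields ordered so nothing packs or aligns differently between 32- and 64-bit
ppc/x86.

    #include <linux/types.h>

    struct sketch_ioctl_arg {
            __u64 buffers_ptr;      /* userspace pointer passed as a plain 64-bit value */
            __u64 size;
            __u32 handle;
            __u32 flags;            /* pairs of __u32 keep the layout hole-free on
                                       every ABI, so no compat ioctl shim is needed */
    };
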
It's not really a graphics memory allocator, just something to track ranges
of address space. It doesn't involve actual allocation, and was consuming
some desired namespace.
This tries to automatically fetch a git revision string and, if it succeeds,
#defines the GIT_REVISION string macro. Packagers can override it with
'make GIT_REVISION=foo'.
Update Nouveau to use GIT_REVISION, if defined, instead of DRIVER_DATE
in struct drm_driver.
Signed-off-by: Pekka Paalanen <pq@iki.fi>
This is the correct fix for the RS690 and hopefully the dma coherent work.
For now we limit everybody to a 32-bit DMA mask but it is possible for
RS690 to use a 40-bit DMA mask for the GART table itself,
and the PCIE cards can use 40-bits for the table entries.
Signed-off-by: Dave Airlie <airlied@redhat.com>
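
A sketch of the conservative setting described above (the function is a
placeholder, not the radeon code): cap the device at a 32-bit DMA mask for
now, even though the RS690 table and the PCIE table entries could use wider
masks later.

    #include <linux/pci.h>
    #include <linux/dma-mapping.h>

    static int set_gart_dma_mask_sketch(struct pci_dev *pdev)
    {
            /* 32 bits for everybody until the 40-bit cases are wired up. */
            return dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
    }
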
The i915_vblank_swap() function schedules an automatic buffer swap
upon receipt of the vertical sync interrupt. Such an operation is
lengthy so it can't be allowed to happen in normal interrupt context,
thus the DRM implements this by scheduling the work in a kernel
softirq-scheduled tasklet. In order for the buffer swap to work
safely, the DRM's central lock must be taken, via a call to
drm_lock_take() located in drivers/char/drm/drm_irq.c within the
function drm_locked_tasklet_func(). The lock-taking logic uses a
non-interrupt-blocking spinlock to implement the manipulations needed
to take the lock. This semantic would be safe if all attempts to use
the spinlock only happen from process context. However this buffer
swap happens from softirq context which is really a form of interrupt
context. Thus we have an unsafe situation, in that
drm_locked_tasklet_func() can block on a spinlock already taken by a
thread in process context which will never get scheduled again because
of the blocked softirq tasklet. This wedges the kernel hard.
To trigger this bug, run a dual-head cloned mode configuration which
uses the i915 drm, then execute an opengl application which
synchronizes buffer swaps against the vertical sync interrupt. In my
testing, a lockup always results after running anywhere from 5 minutes
to an hour and a half. I believe dual-head is needed to really
trigger the problem because then the vertical sync interrupt handling
is no longer predictable (due to being interrupt-sourced from two
different heads running at different speeds). This raises the
probability of the tasklet trying to run while the userspace DRI is
doing things to the GPU (and manipulating the DRM lock).
The fix is to change the relevant spinlock semantics to be the
interrupt-blocking form. After this change I am no longer able to
trigger the lockup; the longest test run so far was 20 hours (test
stopped after that point).
Note: I have examined the places where this spinlock is being
employed; all are reasonably short bounded sequences and should be
suitable for interrupts being blocked without impacting overall kernel
interrupt response latency.
Signed-off-by: Mike Isely <isely@pobox.com>
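
A sketch of the semantic change (the lock here is a placeholder for the DRM
lock-data spinlock): every acquisition uses the interrupt-blocking form, so a
tasklet can never spin forever on a lock that its own CPU's process context
already holds.

    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(lock_data_sketch);

    static void touch_lock_state_sketch(void)
    {
            unsigned long flags;

            /* Local interrupts (and therefore softirq tasklets) are blocked
             * while the lock is held, so the short critical section cannot
             * be preempted by the buffer-swap tasklet. */
            spin_lock_irqsave(&lock_data_sketch, flags);
            /* ... short, bounded manipulation of the DRM lock state ... */
            spin_unlock_irqrestore(&lock_data_sketch, flags);
    }
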
Conflicts:
linux-core/drm_compat.c
linux-core/drm_compat.h
linux-core/drm_ttm.c
shared-core/i915_dma.c
Bump driver minor to 13 due to introduction of new
relocation type.
This patch fixes bits of the DRM to make the radeon DRI work on
non-cache-coherent PCI DMA variants of the PowerPC processors.
It moves the few places that need changes into wrappers so that
other architectures with similar issues can easily add their
own changes to those wrappers, at least until we have a more useful
generic kernel API.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
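
A rough, purely illustrative sketch of the wrapper idea (the config symbol
check and the choice of flush primitive are assumptions, not the patch's
actual code): the cache-maintenance spots funnel through one helper, so
another non-coherent architecture only has to change one place.

    #include <linux/highmem.h>

    static inline void drm_sync_pages_sketch(struct page **pages,
                                             unsigned long count)
    {
    #ifdef CONFIG_NOT_COHERENT_CACHE        /* e.g. non-coherent PowerPC parts */
            unsigned long i;

            for (i = 0; i < count; i++)
                    flush_dcache_page(pages[i]);    /* assumption: per-page flush suffices */
    #else
            /* cache-coherent platforms: nothing to do */
    #endif
    }
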