This creates a default group attached to the legacy drm minor nodes.
It covers all the objects in the set. Make "set resources" only return
objects for this set. Other functions still need fixing up so they only
work on objects in their allowed set.
Okay, we now have crtcs, encoders and connectors.
No more outputs are exposed beyond the driver internals.
I've broken the Intel TV connector stuff.
Really, for TV we should have one TV connector, with a sub-property for the
type of signal being driven over it.
Use subclassing from the drivers to allocate the objects. This saves
two objects being allocated for each crtc/output and generally makes
exit paths cleaner.
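Roughly the pattern this moves to, as a minimal sketch (the struct and field
names below are made-up stand-ins, not the real drm/driver structures): the
driver embeds the core object in its own structure and gets back to it with an
offsetof-style cast, so a single allocation and a single free cover both.

    #include <stddef.h>
    #include <stdlib.h>

    struct crtc_base {                /* stand-in for the core object */
        int id;
    };

    struct driver_crtc {              /* driver subclass */
        struct crtc_base base;        /* embedded, not separately allocated */
        int pipe;                     /* driver-private state lives alongside */
    };

    #define to_driver_crtc(p) \
        ((struct driver_crtc *)((char *)(p) - offsetof(struct driver_crtc, base)))

    int main(void)
    {
        /* One allocation covers the core object and the driver state. */
        struct driver_crtc *dcrtc = calloc(1, sizeof(*dcrtc));
        struct crtc_base *core;

        if (!dcrtc)
            return 1;
        core = &dcrtc->base;

        /* The core code only ever sees 'core'; the driver recovers its
         * own structure when called back. */
        to_driver_crtc(core)->pipe = 0;

        free(dcrtc);                  /* single free on the exit path */
        return 0;
    }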
This splits a lot of the core modesetting code out into a file of
helper functions that are only called from each other and/or the driver.
The driver gets called into more often, or can call these functions itself
if it is a helper-using driver.
I've broken framebuffer resize doing this but I didn't like the API for that
in any case.
Object domain transfer can involve adding flush ops to the request queue,
and so the DRM lock must be held to avoid having the X server smash pointers
badly.
The interrupt enable register cannot be used to temporarily disable
interrupts; use the interrupt mask register instead.
Note that this change means a pile of buffers will be left stuck on the
chip, as the final interrupts will not be seen, so nothing will come along
and drain things.
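As a toy sketch of the rule (the register names and the plain array standing
in for MMIO space are made up): leave the enable register configured and flip
bits in the mask register when a source needs to be silenced temporarily.

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t regs[2];          /* stand-in for the chip's MMIO space */
    #define INT_ENABLE_REG 0          /* hypothetical: sources allowed to interrupt */
    #define INT_MASK_REG   1          /* hypothetical: sources temporarily masked */

    static void irq_source_mask(uint32_t bit)   { regs[INT_MASK_REG] |= bit; }
    static void irq_source_unmask(uint32_t bit) { regs[INT_MASK_REG] &= ~bit; }

    int main(void)
    {
        regs[INT_ENABLE_REG] = 0xff;  /* enables stay as configured */
        irq_source_mask(1u << 3);     /* quiesce one source ... */
        /* ... critical work ... */
        irq_source_unmask(1u << 3);   /* ... then let it interrupt again */
        printf("enable=%#x mask=%#x\n", regs[INT_ENABLE_REG], regs[INT_MASK_REG]);
        return 0;
    }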
Add code to get panel modes from the VBIOS if present and check whether certain
outputs exist. Should make our display detection code a little more robust.
When reading from multiple domains, allow each cache to continue
to hold data until writes occur somewhere. This is done by
first leaving the read_domains alone at bind time (presumably the CPU read
cache contains valid data still) and then in set_domain, if no write_domain
is specified, the new read domains are simply merged into the existing read
domains.
A huge comment was added above set_domain to explain how things are
expected to work.
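A compile-and-run sketch of the rule described above (the domain bit values
and struct are invented for illustration): with no write domain, new read
domains are OR-ed in so other caches may keep their data; with a write domain,
the old read set is replaced.

    #include <stdint.h>
    #include <stdio.h>

    struct obj {
        uint32_t read_domains;
        uint32_t write_domain;
    };

    static void set_domain(struct obj *o, uint32_t read, uint32_t write)
    {
        if (write == 0)
            o->read_domains |= read;  /* read-only: merge, keep old caches valid */
        else
            o->read_domains = read;   /* writing: old read caches become stale */
        o->write_domain = write;
    }

    int main(void)
    {
        struct obj o = { .read_domains = 0x1 };   /* say, CPU */
        set_domain(&o, 0x40, 0);                  /* say, a GTT read */
        printf("read=%#x write=%#x\n", o.read_domains, o.write_domain);
        set_domain(&o, 0x2, 0x2);                 /* a write domain replaces the set */
        printf("read=%#x write=%#x\n", o.read_domains, o.write_domain);
        return 0;
    }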
Newly allocated objects need to be in the CPU domain as they've just been
cleared by the CPU. Also, unmapping objects from the GTT needs to put them
into the CPU domain, both to flush rendering as well as to ensure that any
paging action gets flushed before we remap to the GTT.
Commands in the ring are parsed and started when the head pointer passes by
them, but they are not necessarily finished until a MI_FLUSH happens. This
patch inserts a flush after the execbuffer (the only place a flush wasn't
already happening).
There are now 3 lists. Active is buffers currently in the ringbuffer.
Flushing is buffers not in the ringbuffer, but which need a flush before
unbinding. Inactive is as before. This prevents the object_free → unbind →
wait_rendering → object_reference chain and a kernel oops about weird refcounting.
This also avoids a synchronous extra flush and wait when freeing a buffer
which had a write_domain set (such as a temporary rendered to and then read
from using the 2D engine). It will sit around on the flushing list until the
appropriate flush gets emitted, or until we need the GTT space for another
operation.
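As a small sketch of the state machine behind the three lists (stand-in types,
not the driver's): a retiring buffer with an outstanding write domain parks on
the flushing list and only becomes inactive, and therefore unbindable, once
the covering flush has happened.

    #include <stdio.h>

    enum buf_state { INACTIVE, ACTIVE, FLUSHING };

    struct buf {
        enum buf_state state;
        unsigned int write_domain;    /* non-zero: GPU caches may hold dirty data */
    };

    /* The buffer's last command has passed in the ring. */
    static void retire(struct buf *b)
    {
        b->state = b->write_domain ? FLUSHING : INACTIVE;
    }

    /* A flush covering this buffer has been emitted and completed. */
    static void flush_done(struct buf *b)
    {
        if (b->state == FLUSHING) {
            b->write_domain = 0;
            b->state = INACTIVE;      /* now safe to unbind or evict */
        }
    }

    int main(void)
    {
        struct buf b = { .state = ACTIVE, .write_domain = 0x2 };
        retire(&b);                   /* goes to flushing, not straight to inactive */
        flush_done(&b);
        printf("state=%d\n", b.state);
        return 0;
    }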
The dummy read page will be NULL if drm_bo_driver_init failed at
firstopen (modeset is not enabled), and that will cause a kernel oops at the
subsequent drm_lastclose call, so be sure to check it.
This lets us get some qualities we desire, such as using the full 32-bit
range (except zero), avoiding DRM_WAIT_ON, and a 1:1 mapping of active
sequence numbers to request structs, which will be used soon for throttling
and interrupt-driven list cleanup.
Additionally, a boolean active field is added to indicate which list an
object is on, rather than smashing last_rendering_cookie to 0 to show
inactive. This will help with flush-reduction later on, and makes the code
clearer.
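A sketch of the numbering scheme (types and names invented): sequence numbers
come from a simple 32-bit counter that skips zero on wrap, each request owns
exactly one of them, and objects carry an explicit active flag rather than a
zeroed cookie.

    #include <stdint.h>
    #include <stdio.h>

    struct request {
        uint32_t seqno;               /* exactly one live request per seqno */
    };

    struct obj {
        uint32_t last_seqno;
        int active;                   /* which list we are on, stated explicitly */
    };

    static uint32_t next_seqno = 1;   /* full 32-bit range, zero excluded */

    static void request_init(struct request *req)
    {
        req->seqno = next_seqno++;
        if (next_seqno == 0)
            next_seqno = 1;           /* skip zero when the counter wraps */
    }

    int main(void)
    {
        struct request r;
        struct obj o = { 0 };

        request_init(&r);
        o.last_seqno = r.seqno;
        o.active = 1;                 /* moved to the active list */
        /* when retired: o.active = 0; last_seqno is kept, not smashed to 0 */
        printf("seqno=%u active=%d\n", o.last_seqno, o.active);
        return 0;
    }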
It would be nice if one day the DRM driver was the canonical source for
register definitions and core macros. To that end, this patch cleans
things up quite a bit, removing redundant definitions (some with
different names referring to the same register) and generally tidying up
the header file.
In order to avoid recursive ->detect->interrupt->detect->interrupt->...
we need to disable TV hotplug interrupts in
intel_tv.c:intel_tv_detect_type. We also need to enable the TV interrupt
detection and hotplug sequence properly in i915_irq.c.
No need to fill the ring that much; wait for it to become nearly empty
before adding the execbuffer request. A better fix will involve scheduling
ring insertion in the irq handler.
drm_crtc->fb may be NULL. For example, the X server will allocate a new fb
and assign it to the CRTC at startup; when the X server exits, it will destroy
that fb, leaving drm_crtc->fb pointing to NULL.
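The defensive check, as a trivial sketch (stand-in struct, not the real
drm_crtc):

    #include <stddef.h>

    struct framebuffer;
    struct crtc { struct framebuffer *fb; };

    /* Anything that dereferences crtc->fb has to tolerate the case where the
     * X server has exited, the fb is destroyed and the pointer is NULL. */
    static int crtc_has_fb(const struct crtc *crtc)
    {
        return crtc->fb != NULL;
    }

    int main(void)
    {
        struct crtc c = { .fb = NULL };
        return crtc_has_fb(&c);       /* 0: nothing bound, skip the fb work */
    }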
pread and pwrite must update the memory domains to ensure consistency with
the GPU. At some point, it should be possible to avoid clflush through this
path, but that isn't working for me.
The exec list contains all objects, in order of use. The lru list contains
only unpinned objects ready to be evicted. This required two changes -- the
first was to not migrate pinned objects from exec to lru, the second was to
search for the first unpinned object in the exec list when doing eviction.
Now, the LRU list has objects that are completely done rendering and ready
to kick out, while the execution list has things with active rendering,
which have associated cookies and reference counts on them.
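A sketch of the second change (stand-in types): eviction walks the list in LRU
order and takes the first unpinned entry instead of blindly taking the head.

    #include <stddef.h>
    #include <stdio.h>

    struct obj {
        int pinned;
        const char *name;
    };

    static struct obj *pick_victim(struct obj *list, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            if (!list[i].pinned)
                return &list[i];      /* first unpinned object in use order */
        return NULL;                  /* nothing evictable right now */
    }

    int main(void)
    {
        struct obj exec_list[] = {
            { .pinned = 1, .name = "scanout" },     /* stays put */
            { .pinned = 0, .name = "old batch" },
        };
        struct obj *victim = pick_victim(exec_list, 2);
        printf("evict %s\n", victim ? victim->name : "(none)");
        return 0;
    }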
Even if the TV encoder hasn't been fused off, we may not have a TV connector on
the platform. The BDB in the BIOS should give us this info in some cases.
Domain information is about buffer relationships, not buffer contents. That
means a relocation contains the domain information as it knows how the
source buffer references the target buffer.
This also adds the set_domain ioctl so that user space can move buffers to
the cpu domain.
This should already have been generally safe, since we don't change contents
and put in new relocations between execbufs, so if we were writing in a new
relocation then we'd already have waited for rendering to complete when we
moved the target of the relocation. However, doing the right thing will be
required if we do buffer reuse.
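A sketch of what "the relocation carries the domain information" means in
practice (struct layout and domain bits invented): each relocation names how
the source will use the target, and the kernel accumulates that into the
target's pending domains before execution.

    #include <stdint.h>
    #include <stdio.h>

    struct reloc {
        uint32_t read_domains;        /* how the source buffer reads the target */
        uint32_t write_domain;        /* or writes it */
    };

    struct obj {
        uint32_t pending_read_domains;
        uint32_t pending_write_domain;
    };

    static void apply_reloc(struct obj *target, const struct reloc *r)
    {
        target->pending_read_domains |= r->read_domains;
        target->pending_write_domain |= r->write_domain;
    }

    int main(void)
    {
        struct obj tex = { 0 };
        struct reloc r = { .read_domains = 0x10, .write_domain = 0 };
        apply_reloc(&tex, &r);        /* buffer contents are untouched */
        printf("read=%#x write=%#x\n",
               tex.pending_read_domains, tex.pending_write_domain);
        return 0;
    }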
The kernel has removed nopage, so move the old nopage code paths into a compat
VM file and switch to using the fault paths.
nopfn is also on its way out, so we should switch that path over to fault
soon as well.
If objects on the lru aren't ref counted, they'll get pulled from the gtt as
soon as they are freed. This change does cause objects to get stuck in the
gtt until they're forced out by new requests. The lru should get cleaned
when the irq occurs.
Pages come back from find_or_create_page locked, but must not stay locked
for long. Unlock them immediately instead of waiting until we're done with
them, to avoid deadlock when applications try to touch them.
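A sketch of the pattern, assuming a mapping and page index are already in hand
(this leans on the kernel APIs named above but is not the actual driver code):

    #include <linux/pagemap.h>
    #include <linux/gfp.h>

    static struct page *grab_ttm_page(struct address_space *mapping, pgoff_t index)
    {
        struct page *page = find_or_create_page(mapping, index, GFP_KERNEL);

        if (!page)
            return NULL;

        /* The page comes back locked; holding that lock while user space can
         * fault on the same page would deadlock, so drop it right away. The
         * reference we hold keeps the page around while we use it. */
        unlock_page(page);
        return page;
    }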
Track named objects in /proc/dri/0/gem_names.
Track total object count in /proc/dri/0/gem_objects.
Initialize device gem data.
Return -ENODEV for gem ioctls if the driver doesn't support gem.
Call unlock_page when unbinding from the GTT.
Add numerous missing calls to drm_gem_object_unreference.
Names are just another unique integer set (from another idr object).
Names are removed when the user references (handles) are all destroyed --
this required that handles for objects be counted separately from
internal kernel references (so that we can tell when the handles are all
gone).
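The bookkeeping rule, as a sketch with made-up fields: user handles are counted
separately from kernel references, and the global name is dropped when the last
handle goes, not when the last kernel reference goes.

    #include <stdint.h>
    #include <stdio.h>

    struct obj {
        int refcount;                 /* kernel references (lists, requests, ...) */
        int handle_count;             /* user-space handles only */
        uint32_t name;                /* global name, 0 = unnamed */
    };

    static void handle_close(struct obj *o)
    {
        if (--o->handle_count == 0 && o->name)
            o->name = 0;              /* in the driver: remove from the name idr */
        o->refcount--;                /* each handle also held a kernel reference */
    }

    int main(void)
    {
        struct obj o = { .refcount = 2, .handle_count = 1, .name = 42 };
        handle_close(&o);             /* name goes away even though refs remain */
        printf("name=%u refs=%d\n", o.name, o.refcount);
        return 0;
    }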
Mixed 32/64-bit systems need 'special' help for ioctls where the user-space
and kernel-space datatypes differ. Fixing the datatypes to be the same size,
and to align the same way, for both 32- and 64-bit ppc and x86 environments
will eliminate the need to have magic 32/64-bit ioctl translation code.
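The layout rule, as a sketch (the struct and field names are illustrative):
fixed-size types, pointers carried as 64-bit values, and 64-bit members kept
naturally aligned, so the struct has identical size and offsets for 32- and
64-bit user space.

    #include <stdint.h>

    struct example_ioctl_args {
        uint64_t data_ptr;            /* user pointer passed as a u64 */
        uint64_t size;
        uint32_t handle;
        uint32_t flags;               /* also serves as explicit padding */
    };

    /* 24 bytes with the same offsets on i386, x86_64, ppc32 and ppc64, so no
     * 32/64-bit ioctl translation layer is needed for it. */
    _Static_assert(sizeof(struct example_ioctl_args) == 24, "layout must not vary");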
It's not really a graphics memory allocator, just something to track ranges
of address space. It doesn't involve actual allocation, and was consuming
some desired namespace.
This tries to automatically fetch a git revision string; if it succeeds,
it #defines the GIT_REVISION string macro. Packagers can override it with
'make GIT_REVISION=foo'.
Update Nouveau to use GIT_REVISION, if defined, instead of DRIVER_DATE
in struct drm_driver.
Signed-off-by: Pekka Paalanen <pq@iki.fi>
TV out needs to do load detection, which means we have to find an
available pipe to use for the detection. Port over the pipe reservation
code for this purpose.
Put off registering new outputs with sysfs until they're properly configured,
or we may get duplicates if the type hasn't been set yet (as is the case with
SDVO initialization). This also means moving de-registration into the cleanup
function instead of output destroy, since the latter occurs during the normal
course of setup when an output isn't found (and therefore not registered with
sysfs yet).
This patch ties outputs, output properties and hotplug events into the
DRM core. Each output has a corresponding directory under the primary
DRM device (usually card0) containing dpms, edid, modes, and connection
status files.
New hotplug change events occur when outputs are added or hotplug events
are detected.
This is the correct fix for the RS690 and hopefully for the DMA coherent work.
For now we limit everybody to a 32-bit DMA mask, but it is possible for
RS690 to use a 40-bit DMA mask for the GART table itself,
and the PCIE cards can use 40 bits for the table entries.
Signed-off-by: Dave Airlie <airlied@redhat.com>
The i915_vblank_swap() function schedules an automatic buffer swap
upon receipt of the vertical sync interrupt. Such an operation is
lengthy so it can't be allowed to happen in normal interrupt context,
thus the DRM implements this by scheduling the work in a kernel
softirq-scheduled tasklet. In order for the buffer swap to work
safely, the DRM's central lock must be taken, via a call to
drm_lock_take() located in drivers/char/drm/drm_irq.c within the
function drm_locked_tasklet_func(). The lock-taking logic uses a
non-interrupt-blocking spinlock to implement the manipulations needed
to take the lock. This semantic would be safe if all attempts to use
the spinlock only happen from process context. However this buffer
swap happens from softirq context which is really a form of interrupt
context. Thus we have an unsafe situation, in that
drm_locked_tasklet_func() can block on a spinlock already taken by a
thread in process context which will never get scheduled again because
of the blocked softirq tasklet. This wedges the kernel hard.
To trigger this bug, run a dual-head cloned mode configuration which
uses the i915 drm, then execute an opengl application which
synchronizes buffer swaps against the vertical sync interrupt. In my
testing, a lockup always results after running anywhere from 5 minutes
to an hour and a half. I believe dual-head is needed to really
trigger the problem because then the vertical sync interrupt handling
is no longer predictable (due to being interrupt-sourced from two
different heads running at different speeds). This raises the
probability of the tasklet trying to run while the userspace DRI is
doing things to the GPU (and manipulating the DRM lock).
The fix is to change the relevant spinlock semantics to be the
interrupt-blocking form. After this change I am no longer able to
trigger the lockup; the longest test run so far was 20 hours (test
stopped after that point).
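The shape of the fix, as a sketch rather than the actual drm_lock_take() code:
the process-context path must block interrupts (and therefore softirqs) on the
local CPU while it holds the spinlock, so the tasklet can never spin on a lock
its own CPU already owns.

    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(lock_spin);        /* stand-in for the DRM lock's spinlock */

    static void take_lock_bits(unsigned int *lock_word, unsigned int bits)
    {
        unsigned long flags;

        spin_lock_irqsave(&lock_spin, flags); /* was: plain spin_lock() */
        *lock_word |= bits;                   /* short, bounded manipulation */
        spin_unlock_irqrestore(&lock_spin, flags);
    }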
Note: I have examined the places where this spinlock is being
employed; all are reasonably short bounded sequences and should be
suitable for interrupts being blocked without impacting overall kernel
interrupt response latency.
Signed-off-by: Mike Isely <isely@pobox.com>
Conflicts:
linux-core/drm_compat.c
linux-core/drm_compat.h
linux-core/drm_ttm.c
shared-core/i915_dma.c
Bump driver minor to 13 due to introduction of new
relocation type.
This patch fixes bits of the DRM so as to make the radeon DRI work on
non-cache-coherent PCI DMA variants of the PowerPC processors.
It moves the few places that need changes into wrappers, so that
other architectures with similar issues can easily add their
own changes to those wrappers, at least until we have a more useful
generic kernel API.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
PCI- or high memory.
This is substantially more efficient than drm_bo_kmap,
since the mapping only lives on a single processor.
Unmapping is done using kunmap_atomic() and flushes only a single TLB entry.
Add a support utility, int drm_bo_pfn_prot(), that returns the
pfn and desired page protection for a given bo offset.
This is all intended for relocations in bound TTMs or VRAM.
Mapping-accessing-unmapping must be atomic, either using the preempt_xx() macros
or a spinlock.
This change adds a driver feature that, for i915, is controlled by a module
parameter. You now need to do 'insmod i915.ko modeset=1' to enable the
modesetting paths.
It also fixes up lots of X paths. I can run my new DDX driver on this code
with and without modesetting enabled.
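A sketch of the gating (not the exact i915 code; names are illustrative): a
module parameter read at load time decides whether the modesetting feature bit
gets set.

    #include <linux/module.h>

    static int modeset;                       /* insmod i915.ko modeset=1 */
    module_param(modeset, int, 0400);
    MODULE_PARM_DESC(modeset, "Enable the kernel modesetting paths (0=off, 1=on)");

    #define FEATURE_MODESET 0x1               /* illustrative feature flag */
    static unsigned int driver_features;

    static int __init example_init(void)
    {
        if (modeset)
            driver_features |= FEATURE_MODESET;  /* the rest of the driver keys off this */
        return 0;
    }
    module_init(example_init);
    MODULE_LICENSE("GPL");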
Allow mode to be set with fb_id set to -1, meaning set
the mode with the current fb (if we have one bound).
Allow intelfb to hook its fb back up if modesetting
clears it (maybe temporary).
Move any crtc->fb related register changes to set_base
in intel_fb.
General intelfb cleanups.
This fixes at least two problems:
* The vblank_disable_fn timer callback could get called after the DRM was
de-initialized, e.g. on X server shutdown.
* Leak of vblank related resources when disabling and re-enabling the IRQ, e.g.
on an X server reset.
On many chipsets, the checks for DPLL enable or VGA mode will prevent the
pipeconf regs from being restored, which could result in a blank display or X
failing to come back after resume. So restore them unconditionally along with
actually restoring pipe B's palette correctly.
On resume, if the interrupt state isn't restored correctly, we may end
up with a flood of unexpected or ill-timed interrupts, which could cause
the kernel to disable the interrupt or vblank events to happen at the
wrong time. So save/restore them properly.
There were two problems with the existing callback code: the vblank
enable callback happened multiple times per disable, making drivers more
complex than they had to be, and there was a race between the final
decrement of the vblank usage counter and the next enable call, which
could have resulted in a put->schedule disable->get->enable->disable
sequence, which would be bad.
So add a new vblank_enabled array to track vblank enable on per-pipe
basis, and add a lock to protect it along with the refcount +
enable/disable calls to fix the race.
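A sketch of the locking rule (fields and names invented, and the real code
defers the disable): the same lock covers the refcount transition and the
enable/disable decision, so a racing get cannot slip in between a final put
and its disable.

    #include <linux/spinlock.h>
    #include <linux/atomic.h>

    struct vbl_pipe {
        spinlock_t lock;              /* spin_lock_init() at setup time */
        atomic_t refcount;
        int enabled;                  /* the new per-pipe vblank_enabled state */
    };

    static void vblank_get(struct vbl_pipe *p)
    {
        unsigned long flags;

        spin_lock_irqsave(&p->lock, flags);
        if (atomic_inc_return(&p->refcount) == 1 && !p->enabled)
            p->enabled = 1;           /* driver enable callback runs exactly once */
        spin_unlock_irqrestore(&p->lock, flags);
    }

    static void vblank_put(struct vbl_pipe *p)
    {
        unsigned long flags;

        spin_lock_irqsave(&p->lock, flags);
        if (atomic_dec_and_test(&p->refcount) && p->enabled)
            p->enabled = 0;           /* disable under the same lock: no race */
        spin_unlock_irqrestore(&p->lock, flags);
    }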
A sequence number may actually turn up before the corresponding fence
object has been queued on the ring.
Fence drivers can use this member to determine whether a
sequence number must be re-reported.
In hibernate, we may end up calling the VGA save regs function twice, so we
need to make sure it's idempotent. That means leaving ARX in index mode after
the first save operation. Fixes hibernate on 965.
i915_flush_ttm was unconditionally executing a clflush instruction
to (obviously) flush the cache. Instead, check if the cpu supports
clflush, and if not, fall back to calling wbinvd to flush the entire
cache.
Signed-off-by: Kyle McMartin <kmcmartin@redhat.com>
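A sketch of the flush fallback described above (x86-only; the helper names are
the current kernel's, the original patch used the equivalents of its day):

    #include <asm/cpufeature.h>
    #include <asm/cacheflush.h>
    #include <asm/special_insns.h>

    static void flush_range(void *vaddr, unsigned int size)
    {
        if (boot_cpu_has(X86_FEATURE_CLFLUSH))
            clflush_cache_range(vaddr, size);   /* flush just these lines */
        else
            wbinvd();                           /* no clflush: write back and
                                                   invalidate the whole cache */
    }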
(1 << bits) is undefined when bits == 32; gcc may evaluate this
expression to 1, which will lead to an infinite retry loop in
drm_ht_just_insert_please.
Because of the different implementations of hash_long,
this issue is seen more frequently on 64-bit systems.
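One way to sidestep the undefined shift, as a sketch of the kind of fix needed
(not the exact hashtab change): widen before shifting.

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t size_mask(unsigned int bits)
    {
        uint64_t size = 1ULL << bits; /* well-defined for bits <= 63 */
        return (uint32_t)(size - 1);  /* 0xffffffff when bits == 32 */
    }

    int main(void)
    {
        printf("%#x\n", size_mask(32));   /* the old 1 << 32 was undefined */
        return 0;
    }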
As the DRM_DEBUG macro already prints out the __FUNCTION__ string (see
drivers/char/drm/drmP.h), it is not worth doing this again. In some
other places the trailing "\n" was added.
airlied: I also cleaned up a few that this patch missed.
I couldn't figure out what drm_bo_type_dc was for; Dave Airlie finally clued
me in that it was the 'normal' buffer objects with kernel allocated pages
that could be mmapped from the drm device file.
I thought that 'drm_bo_type_device' was a more descriptive name.
I also added a bunch of comments describing the use of the type enum values and
the functions that use them.
Flags pending validation were stored in a misleadingly named field, 'mask'.
As 'mask' is already used to indicate pieces of a flags field which are
changing, it seems better to use a name reflecting the actual purpose of
this field. I chose 'proposed_flags' as they may not actually end up in
'flags', and in any case will be modified when they are moved over.
This affects the API, but not ABI of the user-mode interface.
Previously, dummy_read_page was used only for read-only user allocations; it
filled in pages that were not present in the user address map (presumably,
pages that were allocated but never written to).
This patch allows them to be used for read-only ttms allocated from the
kernel, so that applications can over-allocate buffers without forcing every
page to be allocated.
I'm hoping to use the dummy_read_page for kernel allocated buffers to avoid
allocating extra pages for read-only buffers (like vertex and batch buffers).
This also eliminates the 'write' parameter to drm_ttm_set_user and just
has DRM_TTM_PAGE_WRITE passed into drm_ttm_create.
Aside from changing drm_bind_ttm to drm_ttm_bind, this patch
adds only documentation and fixes the functions inside drm_ttm.c
to all be prefixed with drm_ttm_.
drmBOSetStatus does not bother to set the fence_class parameter.
Fortunately, drm_bo_setstatus_ioctl doesn't end up using it as it
calls drm_bo_handle_validate with use_old_fence_class = 1.
Document parameters and usage for drm_bo_handle_validate. Change parameter
order to match drm_bo_do_validate (fence_class has been moved to after
flags, hint and mask values). Existing users of this function have been
changed, but out-of-tree users must be modified separately.
Add comments about the parameters to drm_bo_do_validate, along
with comments for the DRM_BO_HINT options. Remove the 'do_wait'
parameter as it is duplicated by DRM_BO_HINT_DONT_BLOCK.
Creating a ttm was done with drm_ttm_init while destruction was done with
drm_destroy_ttm. Renaming these to drm_ttm_create and drm_ttm_destroy makes
their use clearer. Passing page_flags to the create function will allow that
to know whether user or kernel pages are needed, with the goal of allowing
kernel ttms to be saved for later reuse.
This fix is actually a bit of a cleanup too--it moves lock freeing to
drm_rmmap_locked and out of drm_lastclose. This makes it symmetrical with
addmap and also prevents the lock from being incorrectly freed from driver
mappings.
This should work on all radeons, but there are still many things to do:
- add crtc2
- tmds
- lvds
- add bios data table so we don't need to hardcode dac/crtc infos
- separate clock control to make power saving easier & cleaner
- tiling (warning: tiling shouldn't be enabled in double-scan or interlace modes)
- surface reg manager (this goes along with tiling)
- suspend/resume hook
- avivo & r500 family support
- atom bios support (for posting card mostly)
- finish superioctl skeleton
- what else ? :)
So we really want to get a list of modes per output, not the global hammer list.
Also remove the mode IDs and let the user pass back the full mode description.
Need to fix up add/remove mode for user modes now.
This allows the user to retrieve a list of properties for an output.
Properties can either be 32-bit values or an enum with an associated name.
Range properties are yet to be supported.
This API is probably not all correct; I may make properties part of the general
resource get when I think about it some more.
So basically you can create properties and attach them to whatever outputs you want,
so it should be possible to create some generic ones and just attach them to every output.
Since the drm_mode_card_res structure contains user pointers, we have to use
put_user and copy_to_user to write stuff out. The DRM ioctl wrapper will only
take care of copying the base drm_mode_card_res struct, not the included
arrays.
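A sketch of the pattern (struct and field names are illustrative, not the real
drm_mode_card_res): the embedded user pointer is carried as a u64 and the array
behind it is written out with copy_to_user by the ioctl handler itself.

    #include <linux/uaccess.h>
    #include <linux/types.h>
    #include <linux/errno.h>

    struct card_res_sketch {
        __u64 crtc_id_ptr;            /* user pointer carried as a u64 */
        __u32 count_crtcs;            /* in: array capacity, out: entries written */
    };

    static int fill_crtc_ids(struct card_res_sketch *res, const u32 *ids, u32 count)
    {
        u32 __user *out = (u32 __user *)(unsigned long)res->crtc_id_ptr;

        if (count > res->count_crtcs)
            count = res->count_crtcs;
        if (copy_to_user(out, ids, count * sizeof(*ids)))
            return -EFAULT;           /* the wrapper only copies the top-level struct */
        res->count_crtcs = count;     /* copied back to user space by the wrapper */
        return 0;
    }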
This header file is shared across linux and bsd, but is not installed
for user space to access. It's the place to put prototypes and data
types that aren't platform or chipset specific, but still internal to
the drm.
This way the driver can call get_edid during output status detection
(using all the workarounds which are in get_edid) and then provide
this EDID data in the output's get_mode callback.
Conflicts:
linux-core/Makefile.kernel
linux-core/drm_stub.c
linux-core/i915_drv.c
shared-core/i915_dma.c
shared-core/i915_drv.h
Fixup suspend/resume conflicts (basically use what's in DRM master for now).
Also fix up a few other conflicts that snuck in (i915_dma changes etc.).
The vblank_init function wanted a couple of cleanups.
Also, drm_irq_install wasn't checking the new return value of irq_postinstall.
If it returns a failure, assume IRQs didn't get set up and take appropriate
action.
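The error handling, as a self-contained sketch (the helpers and the struct are
made up, not the real drm_irq_install internals):

    struct dev {
        int (*irq_postinstall)(struct dev *d);
    };

    static int request_dev_irq(struct dev *d) { (void)d; return 0; }   /* stand-in */
    static void free_dev_irq(struct dev *d)   { (void)d; }             /* stand-in */

    static int irq_install(struct dev *d)
    {
        int ret = request_dev_irq(d);

        if (ret)
            return ret;

        ret = d->irq_postinstall(d);  /* now returns an error code */
        if (ret) {
            free_dev_irq(d);          /* assume IRQs didn't get set up; unwind */
            return ret;
        }
        return 0;
    }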