graphics objects:
- No longer takes flags/dmaobj parameters; this requires some major changes
to the ddx to set up the object through the FIFO. This change is
likely to cause breakages on some cards (tested on NV05, NV28, NV35,
NV40 and NV4E).
dma objects:
- Now takes a "class" parameter; it's not really used yet, but we may need
it at some point.
- parameters are checked, so clients can't randomly create DMA objects
pointing at whatever they feel like.
misc:
- Added FB_SIZE/AGP_SIZE getparams (see the sketch after this list)
- Read PFIFO_INTR in PFIFO irq handler, not PMC_INTR
- Dump PGRAPH trap info on PGRAPH_INTR_NOTIFY if NSOURCE isn't
NOTIFICATION_PENDING.
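For the new getparams, a minimal user-space sketch of how they would be
queried; the parameter names, command index and struct layout below are
assumptions for illustration, not copied from nouveau_drm.h:

    /* Hypothetical sketch of querying the new size parameters.  The
     * constants and struct layout are assumptions for illustration. */
    #include <stdint.h>
    #include <xf86drm.h>

    #define SKETCH_DRM_NOUVEAU_GETPARAM 0x00  /* assumed command index  */
    #define SKETCH_GETPARAM_FB_SIZE     8     /* assumed parameter ids  */
    #define SKETCH_GETPARAM_AGP_SIZE    9

    struct sketch_nouveau_getparam {
            uint64_t param;
            uint64_t value;
    };

    static int query_mem_sizes(int fd, uint64_t *fb_size, uint64_t *agp_size)
    {
            struct sketch_nouveau_getparam gp = { SKETCH_GETPARAM_FB_SIZE, 0 };

            if (drmCommandWriteRead(fd, SKETCH_DRM_NOUVEAU_GETPARAM,
                                    &gp, sizeof(gp)))
                    return -1;
            *fb_size = gp.value;

            gp.param = SKETCH_GETPARAM_AGP_SIZE;
            if (drmCommandWriteRead(fd, SKETCH_DRM_NOUVEAU_GETPARAM,
                                    &gp, sizeof(gp)))
                    return -1;
            *agp_size = gp.value;
            return 0;
    }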
Add refcounting of user waiters to the DRM hardware lock, so that we can use the
DRM_LOCK_CONT flag more conservatively.
Also add a kernel waiter refcount that, if nonzero, transfers the lock to the
kernel context when it is released. This is useful when waiting for idle and
can be used for very simple fence object driver implementations for the new
memory manager.
It also resolves the AIGLX startup deadlock for the sis and the via drivers.
i810 and i830 still require that the hardware lock really be taken, so the
deadlock remains for those two. I'm not sure about ffb. Anyone familiar with that code?
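A stand-alone model of the idea; the field names and lock-word encoding are
illustrative rather than the actual drm_lock.c code:

    /* Stand-alone model of the idea, not the actual drm_lock.c code;
     * field names and the lock-word encoding are illustrative. */
    #include <stdatomic.h>

    #define SKETCH_LOCK_HELD      0x80000000u
    #define SKETCH_LOCK_CONT      0x40000000u  /* "contended" hint to clients */
    #define SKETCH_KERNEL_CONTEXT 0x00000000u

    struct sketch_lock {
            atomic_uint lock;            /* context | HELD | CONT bits      */
            atomic_uint user_waiters;    /* clients sleeping on the HW lock */
            atomic_uint kernel_waiters;  /* kernel paths that want idle HW  */
    };

    /* Called when the current holder drops the lock. */
    static void sketch_unlock(struct sketch_lock *l)
    {
            if (atomic_load(&l->kernel_waiters)) {
                    /* Hand the lock to the kernel context so wait-for-idle
                     * (or a very simple fence driver) can run before user
                     * space grabs the hardware again. */
                    atomic_store(&l->lock,
                                 SKETCH_KERNEL_CONTEXT | SKETCH_LOCK_HELD);
                    return;
            }
            /* Only advertise contention when user waiters really exist;
             * that is what refcounting them buys us. */
            atomic_store(&l->lock,
                         atomic_load(&l->user_waiters) ? SKETCH_LOCK_CONT : 0u);
    }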
The MI_WAIT_FOR_EVENT instruction does not support waiting for several events
at once, so this should fix the lockups with page flipping when both pipes are
enabled.
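What the fix amounts to, as a sketch: emit one MI_WAIT_FOR_EVENT per plane
instead of a single instruction with both event bits set. The opcode and bit
values are recalled from the i915 defines and should be treated as assumptions:

    /* Sketch: one MI_WAIT_FOR_EVENT per plane instead of a single
     * instruction with both event bits set.  The opcode and bit values
     * are recollections of the i915 defines - treat them as assumptions. */
    #include <stddef.h>
    #include <stdint.h>

    #define SK_MI_WAIT_FOR_EVENT        (0x03u << 23)
    #define SK_MI_WAIT_FOR_PLANE_A_FLIP (1u << 2)
    #define SK_MI_WAIT_FOR_PLANE_B_FLIP (1u << 6)
    #define SK_MI_NOOP                  0u

    /* Append flip waits for the planes in 'plane_mask' (bit 0 = plane A,
     * bit 1 = plane B) to a command buffer; returns dwords written. */
    static size_t emit_flip_waits(uint32_t *cmd, unsigned int plane_mask)
    {
            size_t n = 0;

            if (plane_mask & 0x1)
                    cmd[n++] = SK_MI_WAIT_FOR_EVENT | SK_MI_WAIT_FOR_PLANE_A_FLIP;
            if (plane_mask & 0x2)
                    cmd[n++] = SK_MI_WAIT_FOR_EVENT | SK_MI_WAIT_FOR_PLANE_B_FLIP;
            if (n & 1)
                    cmd[n++] = SK_MI_NOOP;  /* keep the ring qword-aligned */

            return n;
    }

With both pipes active, the flip path would then emit two of these waits back
to back rather than one combined wait.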
Always use dev_priv->sarea_priv->pf_current_page directly. This allows clients
to modify it as well while they hold the HW lock, e.g. in order to sync pages
between pipes.
The assumption is that synchronous flips are usually not isolated, and waiting
for all of them could result in stalling the pipeline for long periods of time.
Also use i915_emit_mi_flush() instead of an old-fashioned way to achieve the
same effect.
Unfortunately, emitting asynchronous flips during vertical blank results in
tearing. So we have to wait for the previous vertical blank and emit a
synchronous flip.
Leave it to the client to wait for the flip to complete when necessary,
but wait for a previous flip to complete before emitting another one. This
should help avoid unnecessary stalling of the ring due to pending flips.
Call i915_do_cleanup_pageflip() unconditionally in preclose.
Memory types are either fixed (on-card or pre-bound AGP) or not fixed
(dynamically bound) to an aperture. They also carry information about:
1) Whether they can be mapped cached.
2) Whether they are at all mappable.
3) Whether they need an ioremap to be accessible from kernel space.
In this way VRAM memory and, for example, pre-bound AGP appear
identical to the memory manager.
This also makes support for unmappable VRAM simple to implement.
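A sketch of the per-memory-type description this implies; the flag names are
illustrative, not the manager's actual identifiers:

    /* Sketch of a per-memory-type description; flag names are
     * illustrative, not the memory manager's actual identifiers. */
    #include <stdbool.h>
    #include <stdint.h>

    #define SK_MEMTYPE_FIXED          (1u << 0) /* on-card / pre-bound AGP   */
    #define SK_MEMTYPE_MAPPABLE       (1u << 1) /* CPU-mappable at all       */
    #define SK_MEMTYPE_CAN_MAP_CACHED (1u << 2) /* cached CPU mappings OK    */
    #define SK_MEMTYPE_NEEDS_IOREMAP  (1u << 3) /* kernel access via ioremap */

    struct sk_memtype {
            uint32_t flags;
            uint64_t aperture_base;   /* meaningful for fixed types only */
            uint64_t aperture_size;
    };

    /* VRAM and pre-bound AGP get the same shape of descriptor, which is
     * why the memory manager can treat them identically; unmappable VRAM
     * is just a fixed type without SK_MEMTYPE_MAPPABLE. */
    static bool sk_cpu_accessible(const struct sk_memtype *t)
    {
            return (t->flags & SK_MEMTYPE_MAPPABLE) != 0;
    }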
Hook into nv20 pgraph switching functions (they're identical for nv3x).
Actually call nv30_pgraph_context_init so the ctx_table is allocated.
Thanks to Carlos Martin for the help.
This means the loop will wait up to ~10ms for ring buffer space to become
available, rather than just however long it takes to check the space 10000
times. This matches other drivers' behavior when waiting for ring buffer/fifo
space.
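The shape of the change, as a sketch; ring_space() stands in for however the
driver reads the free space, and in the DRM the delay would typically be
DRM_UDELAY(1):

    /* Sketch of the shape of the fix: a one microsecond delay per
     * iteration turns "poll 10000 times as fast as possible" into
     * "poll for roughly 10 ms". */
    #include <unistd.h>

    static int wait_ring_space(int (*ring_space)(void *ring), void *ring,
                               int needed)
    {
            int i;

            for (i = 0; i < 10000; i++) {
                    if (ring_space(ring) >= needed)
                            return 0;
                    usleep(1);    /* 10000 iterations * 1us ~= 10 ms total */
            }
            return -1;            /* timed out; the ring is still full */
    }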
* Pulled in some registers from nv10reg.h. Needed for context switching.
* Filled in nv30 graphics context (based on nv40_graph.c).
* Figure out nv30 context table, set up on context creation. Allows the card's automatic switching to work.
The overflows could lead to the AGP aperture overlapping the framebuffer area
in the card's address space when the latter is located at the very end of the
32 bit address space, which would result in a freeze on X server startup,
probably because the card read commands from the framebuffer instead of from
AGP.
See http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=392915 .
Now I can get 3D + working grctx switching on my NV40 without
the binary driver initialising the card first. However, this
change also breaks 3D on my C51 even *with* the binary driver's
help. So, it's likely that the weird voodoo is card-specific.
This is enough to get grctx switching going on my NV40 and C51 after
the binary driver has initialised the card first.
Bumping the drm patchlevel because the ddx needs some modifications to
have NV4x work at all with these changes.
With this change the GPU is responsible for doing the channel switch
itself. This is needed for the upcoming NV4x PGRAPH context work as
we don't yet know enough to manually swap PGRAPH contexts.
Replace r300_check_offset() with generic radeon_check_offset(), which doesn't
reject valid offsets when the framebuffer area is at the very end of the card's
32 bit address space. Make radeon_check_and_fixup_offset() use
radeon_check_offset() as well.
This fixes https://bugs.freedesktop.org/show_bug.cgi?id=7697 .
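The overflow-safe check is roughly of the following shape; the field names are
from memory of the radeon private struct and may not be exact:

    /* Roughly the shape of an overflow-safe range check; field names are
     * from memory and may not match the radeon private struct exactly. */
    #include <stdint.h>

    struct sketch_ranges {
            uint32_t fb_location, fb_size;
            uint32_t gart_vm_start, gart_size;
    };

    static int sketch_check_offset(const struct sketch_ranges *r, uint64_t off)
    {
            /* Inclusive end addresses: 0xf0000000 + 0x10000000 - 1 is
             * 0xffffffff, whereas the exclusive sum would wrap to 0 and
             * reject every offset in a framebuffer that ends at the top
             * of the 32 bit address space. */
            uint32_t fb_end   = r->fb_location + r->fb_size - 1;
            uint32_t gart_end = r->gart_vm_start + r->gart_size - 1;

            return (off >= r->fb_location   && off <= fb_end) ||
                   (off >= r->gart_vm_start && off <= gart_end);
    }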
The current version didn't build on BSD, where the new functionality isn't used
yet anyway. Whoever changes that will hopefully be able to make the OSes share
this file as well.
Previously, if there were several buffer swaps scheduled for the same vertical
blank, all but the first blit emitted stood a chance of exhibiting tearing. In
order to avoid this, split the blits along slices of each output top to bottom.
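A sketch of the slicing idea; emit_blit() stands in for the driver's actual
blit emission and the clipping is reduced to the vertical axis:

    /* Sketch of the slicing idea; emit_blit() stands in for the driver's
     * blit emission and only vertical clipping is shown. */
    struct sketch_rect { int x1, y1, x2, y2; };

    static void emit_sliced_swaps(const struct sketch_rect *swaps, int nswaps,
                                  int width, int height, int slice_h,
                                  void (*emit_blit)(const struct sketch_rect *))
    {
            int top, i;

            /* Walk the output top to bottom so no blit lags far behind
             * the scanout position reached by the blit before it. */
            for (top = 0; top < height; top += slice_h) {
                    struct sketch_rect slice = { 0, top, width, top + slice_h };

                    if (slice.y2 > height)
                            slice.y2 = height;

                    for (i = 0; i < nswaps; i++) {
                            struct sketch_rect r = swaps[i];

                            /* clip this swap's rectangle to the slice */
                            if (r.y1 < slice.y1)
                                    r.y1 = slice.y1;
                            if (r.y2 > slice.y2)
                                    r.y2 = slice.y2;
                            if (r.y1 < r.y2)
                                    emit_blit(&r);
                    }
            }
    }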
- Do important card init in firstopen
- Give each channel its own cmdbuf dma object
- Move RAMHT config state to the same place as RAMRO/RAMFC
- Make sure instance mem for objects is *after* RAM{FC,HT,RO}
On X init, PFIFO and PGRAPH are reset to defaults. This causes the GPU to
lose the configuration done by the drm. Perhaps a CARD_INIT ioctl would be a
proper solution to avoid running into this problem again in the future...
This will come in very handy for tiled buffers on intel hardware.
Also add some padding to interface structures to allow future binary backwards
compatible changes.
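The padding idea in miniature, with a purely illustrative struct; callers
zero-fill the reserved space so the kernel can later give it meaning without
changing the struct size:

    /* Purely illustrative: reserve space at the end of an ioctl argument
     * struct now so later fields can be added without changing the
     * struct size or breaking old binaries. */
    #include <stdint.h>

    struct sketch_ioctl_arg {
            uint64_t handle;
            uint64_t flags;              /* e.g. a tiling flag could go here */
            uint64_t expansion_pad[4];   /* reserved, must be zero for now   */
    };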
OK, I lied before... it was a fluke that it worked, and it required magic to repeat.
It actually helps to fill in RAMFC entries in the correct place.
The code also clears RAMIN entirely instead of just the hash-table.
Fix changing of the caching policy on bound buffers: allow the caching policy
of a bound buffer to be changed on the fly if the hardware supports it.
Allow drivers to use driver-specific AGP memory types for TTM AGP pages.
Will make AGP drivers much easier to migrate.
Adapt for new functions in the 2.6.19 kernel.
Remove the ability to have multiple regions in one TTM.
This simplifies a lot of code.
Remove the ability to access TTMs from user space.
We don't need it anymore without ttm regions.
Don't change caching policy for evicted buffers. Instead change it only
when the buffer is accessed by the CPU (on the first page fault).
This tremendously speeds up eviction rates.
Current code is safe for kernels <= 2.6.14.
Should also be OK with 2.6.19 and above.
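A sketch of the lazy scheme described above, with illustrative names: eviction
only sets a flag, and the expensive page-attribute work happens at the first
CPU fault, if it happens at all:

    /* Sketch of the lazy scheme; names are illustrative. */
    #include <stdbool.h>

    struct sketch_buffer {
            bool cache_change_pending;  /* noted at eviction, not applied */
    };

    static void sketch_on_evict(struct sketch_buffer *buf)
    {
            /* Do NOT touch page caching attributes here - that is the
             * expensive part (global cache/TLB work per buffer). */
            buf->cache_change_pending = true;
    }

    static void sketch_on_first_cpu_fault(struct sketch_buffer *buf)
    {
            if (buf->cache_change_pending) {
                    /* The real code would switch the caching policy of the
                     * pages here, just before mapping them for the CPU. */
                    buf->cache_change_pending = false;
            }
    }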
Added preliminary support for context switches (triggers the interrupts, but hangs after the switch; something's not quite right yet).
Removed the PFIFO_REINIT ioctl. I hope that's a good idea...
Requires the upcoming commit to the DDX.
mach64_state.c: convert the DRM_MACH64_BLIT ioctl to submit a pointer to
user-space memory rather than a DMA buffer index, similar to DRM_MACH64_VERTEX.
This change allows the DDX to map the DMA buffers read-only and eliminate a
security problem where a client can alter the contents of the DMA buffer after
submission to the DRM.
This change also affects the DRI/DRM interface. Performance-wise, it basically
affects PCI mode, where I get a ~12% speedup for some Mesa demos I tested.
This is mainly due to eliminating an ioctl for allocating the DMA buffer.
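Roughly what the blit interface change looks like; the field layout is from
memory and may not match mach64_drm.h exactly:

    /* Rough sketch of the interface change; the field layout is from
     * memory and may not match mach64_drm.h exactly. */
    #include <stdint.h>

    struct sketch_mach64_blit {
            void    *buf;      /* was: int idx - an index into the shared,
                                * client-visible DMA buffer pool           */
            int      pitch;
            int      offset;
            int      format;
            uint16_t x, y;
            uint16_t width, height;
    };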
mach64_dma.c: move the responsibility for allocating memory for the DMA ring
in PCI mode to the DDX.
This change affects the DDX/DRM interface and unifies a couple of PCI/AGP code
paths for ring memory in the DRM.
Bump the mach64 DRM version major and date.
Map the DMA buffers from the same linear address as the vertex bufs. If
dev->agp_buffer_token is not set, mach64 drm maps the DMA buffers from
linear address 0x0.
This fixes issues on X server startup with versions of xf86-video-intel that
enable the IRQ before they have a context ID.
(cherry picked from 7af93dd984 commit)
It looks like, 'after a while', I915REG_INT_IDENTITY_R for some reason always
has VSYNC_PIPEB_FLAG set in the interrupt handler, even though pipe B is disabled.
So we only increase dev->vbl_received if the corresponding bit is also set in
dev->vblank_pipe.
(cherry picked from 881ba56992 commit)
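A simplified sketch of the gating; the bit values are assumptions and the real
handler also maintains a secondary counter for pipe B:

    /* Simplified sketch of the gating; bit values are assumptions. */
    #include <stdint.h>

    #define SKETCH_VSYNC_PIPEA_FLAG (1u << 7)
    #define SKETCH_VSYNC_PIPEB_FLAG (1u << 5)
    #define SKETCH_VBLANK_PIPE_A    (1u << 0)
    #define SKETCH_VBLANK_PIPE_B    (1u << 1)

    static int count_this_vblank(uint32_t identity, uint32_t vblank_pipe)
    {
            /* A stray VSYNC_PIPEB_FLAG no longer bumps the counter when
             * pipe B is not enabled in dev->vblank_pipe. */
            return ((identity & SKETCH_VSYNC_PIPEA_FLAG) &&
                    (vblank_pipe & SKETCH_VBLANK_PIPE_A)) ||
                   ((identity & SKETCH_VSYNC_PIPEB_FLAG) &&
                    (vblank_pipe & SKETCH_VBLANK_PIPE_B));
    }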
When this flag is set and the target sequence is missed, wait for the next
vertical blank instead of returning immediately.
(cherry picked from 89e323e490 commit)
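A sketch of the decision this flag adds to the vblank-wait path; the flag
value and helper below are illustrative:

    /* Sketch of the decision added to the vblank-wait path; the flag
     * value and helper are illustrative. */
    #include <stdint.h>

    #define SKETCH_VBLANK_NEXTONMISS 0x10000000u  /* assumed flag value */

    static uint32_t resolve_target_sequence(uint32_t requested,
                                            uint32_t current, uint32_t flags)
    {
            /* "missed" means the counter has already passed the target
             * (wrap-safe signed difference). */
            int missed = (int32_t)(current - requested) > 0;

            if (missed && (flags & SKETCH_VBLANK_NEXTONMISS))
                    return current + 1;  /* wait for the next vertical blank */

            return requested;  /* a missed target without the flag simply
                                * returns immediately, as before */
    }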