The mm_lock function is used when leaving the VT. It evicts _all_ buffers.
Buffers with the DRM_BO_NO_MOVE attribute set are guaranteed to
get the same offset if and when they are rebound.
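For illustration, a minimal user-space sketch of that guarantee; the struct
and field names here (bo, no_move, saved_offset) are invented and are not
the actual DRM buffer-object API:

    #include <stdio.h>
    #include <stdbool.h>

    /* Hypothetical buffer object; field names are illustrative only. */
    struct bo {
        unsigned long offset;       /* current aperture offset, if bound */
        unsigned long saved_offset; /* remembered across lock/unlock */
        bool no_move;               /* stands in for DRM_BO_NO_MOVE */
        bool bound;
    };

    /* Evict everything; NO_MOVE buffers keep their offset for rebinding. */
    static void mm_lock_sketch(struct bo *bos, int n)
    {
        for (int i = 0; i < n; i++) {
            if (bos[i].no_move)
                bos[i].saved_offset = bos[i].offset;
            bos[i].bound = false; /* evicted */
        }
    }

    /* Rebind after unlock; NO_MOVE buffers get their old offset back. */
    static void rebind_sketch(struct bo *bo, unsigned long fresh_offset)
    {
        bo->offset = bo->no_move ? bo->saved_offset : fresh_offset;
        bo->bound = true;
    }

    int main(void)
    {
        struct bo bos[2] = {
            { .offset = 0x1000, .no_move = true,  .bound = true },
            { .offset = 0x2000, .no_move = false, .bound = true },
        };
        mm_lock_sketch(bos, 2);
        rebind_sketch(&bos[0], 0x8000);
        rebind_sketch(&bos[1], 0x8000);
        printf("no_move buffer rebound at 0x%lx\n", bos[0].offset);
        return 0;
    }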
OK, I lied before... it was a fluke that it worked, and it required magic to repeat.
It actually helps to fill in RAMFC entries in the correct place.
The code also clears RAMIN entirely instead of just the hash-table.
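A hedged sketch of the "clear RAMIN entirely" step; RAMIN_SIZE and
ramin_wr32 are stand-ins for the driver's real instance-memory size and
MMIO accessor:

    #include <stdint.h>
    #include <stdio.h>

    #define RAMIN_SIZE (64 * 1024)        /* illustrative, not from the source */
    static uint32_t ramin[RAMIN_SIZE / 4]; /* fake backing store for the sketch */

    /* Stand-in for the driver's write into instance memory. */
    static void ramin_wr32(uint32_t offset, uint32_t val)
    {
        ramin[offset / 4] = val;
    }

    int main(void)
    {
        /* Clear all of RAMIN, not just the hash table at its start. */
        for (int off = 0; off < RAMIN_SIZE; off += 4)
            ramin_wr32(off, 0);
        printf("cleared %d bytes of (fake) RAMIN\n", RAMIN_SIZE);
        return 0;
    }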
Fix changing of the caching policy on bound buffers: allow the caching
policy of a bound buffer to be changed on the fly if the hardware
supports it.
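A minimal sketch of the intended behavior, assuming a hardware capability
flag; every name here (device_caps, set_caching) is illustrative, not the
DRM API:

    #include <stdbool.h>
    #include <stdio.h>

    enum caching { CACHED, UNCACHED, WRITE_COMBINED };

    struct device_caps {
        bool can_flip_caching_while_bound; /* assumed hardware capability */
    };

    struct bo {
        bool bound;
        enum caching caching;
    };

    /*
     * Change a buffer's caching policy. If the buffer is bound and the
     * hardware cannot flip caching in place, a real driver would have to
     * evict and rebind instead (elided here).
     */
    static int set_caching(const struct device_caps *caps, struct bo *bo,
                           enum caching wanted)
    {
        if (bo->caching == wanted)
            return 0;
        if (bo->bound && !caps->can_flip_caching_while_bound)
            return -1; /* needs an unbind/rebind cycle instead */
        bo->caching = wanted; /* on-the-fly change */
        return 0;
    }

    int main(void)
    {
        struct device_caps caps = { .can_flip_caching_while_bound = true };
        struct bo bo = { .bound = true, .caching = CACHED };
        printf("set_caching returned %d\n",
               set_caching(&caps, &bo, WRITE_COMBINED));
        return 0;
    }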
Allow drivers to use driver-specific AGP memory types for TTM AGP pages.
This will make AGP drivers much easier to migrate.
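One way to read "driver-specific AGP memory types" is a per-driver hook the
AGP backend consults when allocating TTM pages; the hook below
(driver_agp_type) is invented purely for illustration:

    #include <stdio.h>

    /* Illustrative backend with an optional driver override. */
    struct agp_backend {
        int (*driver_agp_type)(void); /* NULL means: use the generic type */
    };

    static int generic_agp_type(void) { return 0; } /* generic memory type */
    static int mydrv_agp_type(void)   { return 2; } /* driver-specific one */

    static int agp_type_for(const struct agp_backend *be)
    {
        return be->driver_agp_type ? be->driver_agp_type()
                                   : generic_agp_type();
    }

    int main(void)
    {
        struct agp_backend plain = { 0 };
        struct agp_backend mydrv = { .driver_agp_type = mydrv_agp_type };
        printf("plain=%d mydrv=%d\n",
               agp_type_for(&plain), agp_type_for(&mydrv));
        return 0;
    }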
Adapt for new functions in the 2.6.19 kernel.
Remove the ability to have multiple regions in one TTM.
This simplifies a lot of code.
Remove the ability to access TTMs from user space.
It is no longer needed now that TTM regions are gone.
Don't change caching policy for evicted buffers. Instead change it only
when the buffer is accessed by the CPU (on the first page fault).
This tremendously speeds up eviction.
The current code is safe for kernels <= 2.6.14, and should also be OK
with 2.6.19 and above.
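A minimal sketch of that lazy transition, with invented names; the point is
that eviction leaves the caching policy alone and the expensive change is
deferred to the first CPU page fault:

    #include <stdbool.h>
    #include <stdio.h>

    enum caching { CACHED, UNCACHED };

    struct bo {
        enum caching caching;
        bool cpu_mapped; /* has the CPU faulted a page of this buffer yet? */
    };

    /* Eviction no longer touches caching; it just unmaps the buffer. */
    static void evict(struct bo *bo)
    {
        bo->cpu_mapped = false; /* mapping torn down; caching left alone */
    }

    /* First CPU page fault: only now pay for the caching transition. */
    static void fault(struct bo *bo)
    {
        if (!bo->cpu_mapped) {
            bo->caching = CACHED; /* expensive change, deferred until needed */
            bo->cpu_mapped = true;
        }
    }

    int main(void)
    {
        struct bo bo = { .caching = UNCACHED, .cpu_mapped = false };
        evict(&bo); /* cheap: no caching change on eviction */
        fault(&bo); /* the transition happens here, on CPU access */
        printf("caching after first fault: %d\n", bo.caching);
        return 0;
    }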
Added preliminary support for context switches (triggers the interrupts, but hangs after the switch; something's not quite right yet).
Removed the PFIFO_REINIT ioctl. I hope that's a good idea...
Requires the upcoming commit to the DDX.
mach64_state.c: convert the DRM_MACH64_BLIT ioctl to submit a pointer to
user-space memory rather than a DMA buffer index, similar to DRM_MACH64_VERTEX.
This change allows the DDX to map the DMA buffers read-only and eliminate a
security problem where a client can alter the contents of the DMA buffer after
submission to the DRM.
This change also affects the DRI/DRM interface. Performance-wise, it mainly
affects PCI mode, where I got a ~12% speedup for some Mesa demos I tested.
This is mostly due to eliminating an ioctl for allocating the DMA buffer.
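A hedged sketch of the interface change; the struct layouts and names are
invented, not the actual mach64 headers, and the copy loop stands in for
copy_from_user():

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Old style (illustrative): the client names a shared DMA buffer it
     * also has mapped writable, so it can scribble on it after submission. */
    struct blit_by_index {
        int buf_index; /* index into the shared DMA buffer pool */
    };

    /* New style (illustrative): the client passes its own memory, and the
     * kernel snapshots it into a DMA buffer the client cannot write. */
    struct blit_by_pointer {
        const void *user_data; /* copied at submission time */
        size_t      size;
    };

    static int submit_blit(const struct blit_by_pointer *b,
                           uint8_t *dma_buf, size_t dma_size)
    {
        if (b->size > dma_size)
            return -1;
        /* Stands in for copy_from_user(); after this the client can no
         * longer alter what the hardware will read. */
        for (size_t i = 0; i < b->size; i++)
            dma_buf[i] = ((const uint8_t *)b->user_data)[i];
        return 0;
    }

    int main(void)
    {
        uint8_t payload[4] = { 1, 2, 3, 4 }, dma[16];
        struct blit_by_pointer b = { payload, sizeof(payload) };
        printf("submit_blit returned %d\n", submit_blit(&b, dma, sizeof(dma)));
        return 0;
    }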
mach64_dma.c: move the responsibility for allocating memory for the DMA ring
in PCI mode to the DDX.
This change affects the DDX/DRM interface and unifies a couple of PCI/AGP code
paths for ring memory in the DRM.
Bump the mach64 DRM version major and date.
Map the DMA buffers at the same linear address as the vertex buffers. If
dev->agp_buffer_token is not set, the mach64 DRM maps the DMA buffers
starting at linear address 0x0.
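A sketch of the offset logic; agp_buffer_token echoes the real struct
drm_device field, but everything else here is a simplified stand-in:

    #include <stdio.h>

    struct dev_sketch {
        unsigned long agp_buffer_token; /* base for mapping the DMA buffers */
        unsigned long vertex_buf_base;  /* where the vertex buffers live */
    };

    /* Without a token, the buffers end up mapped from linear address 0x0. */
    static unsigned long dma_buffer_base(const struct dev_sketch *dev)
    {
        return dev->agp_buffer_token; /* 0x0 when never set */
    }

    int main(void)
    {
        struct dev_sketch dev = { .vertex_buf_base = 0x40000000UL };

        printf("unset token -> base 0x%lx\n", dma_buffer_base(&dev));

        /* The fix: point the token at the vertex-buffer linear address. */
        dev.agp_buffer_token = dev.vertex_buf_base;
        printf("fixed token -> base 0x%lx\n", dma_buffer_base(&dev));
        return 0;
    }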
Initialize the spinlock unconditionally when struct drm_device is filled in,
and return early in drm_locked_tasklet() if the driver doesn't support IRQs.
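A control-flow sketch of the fix; the structures are simplified stand-ins,
and only the ordering (unconditional lock init, early out when the driver
lacks IRQ support) mirrors the description:

    #include <stdbool.h>
    #include <stdio.h>

    struct dev_sketch {
        bool lock_initialized; /* stands in for the tasklet spinlock */
        bool driver_has_irq;   /* stands in for a DRIVER_HAVE_IRQ-style flag */
    };

    static void fill_in_dev(struct dev_sketch *dev, bool has_irq)
    {
        /* Initialize the lock unconditionally, not only for IRQ-capable
         * drivers, so later users never see an uninitialized spinlock. */
        dev->lock_initialized = true;
        dev->driver_has_irq = has_irq;
    }

    static void locked_tasklet_sketch(struct dev_sketch *dev, void (*fn)(void))
    {
        if (!dev->driver_has_irq)
            return; /* early out: nothing to schedule without IRQs */
        /* take the lock, run or defer fn ... (elided) */
        fn();
    }

    static void work(void) { printf("tasklet body ran\n"); }

    int main(void)
    {
        struct dev_sketch no_irq, with_irq;
        fill_in_dev(&no_irq, false);
        fill_in_dev(&with_irq, true);
        locked_tasklet_sketch(&no_irq, work);   /* returns early */
        locked_tasklet_sketch(&with_irq, work); /* runs */
        return 0;
    }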