OK, I lied before: it was a fluke that it worked, and it required magic to repeat.
It actually helps to fill in RAMFC entries in the correct place.
The code also clears RAMIN entirely instead of just the hash-table.
Fix changing of the caching policy on bound buffers: allow the caching policy
of a bound buffer to be changed on the fly if the hardware supports it.
Allow drivers to use driver-specific AGP memory types for TTM AGP pages.
This will make AGP drivers much easier to migrate.
Adapt for new functions in the 2.6.19 kernel.
Remove the ability to have multiple regions in one TTM.
This simplifies a lot of code.
Remove the ability to access TTMs from user space.
We don't need it anymore now that TTM regions are gone.
Don't change the caching policy for evicted buffers. Instead, change it only
when the buffer is accessed by the CPU (on the first page fault); see the
sketch below. This tremendously speeds up eviction.
Current code is safe for kernels <= 2.6.14.
Should also be OK with 2.6.19 and above.
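The lazy switch above can be pictured roughly as follows. This is an
illustrative sketch only, not the actual TTM code; the struct and the
change_caching_policy() helper are hypothetical stand-ins for the real
page-attribute machinery.

    /*
     * Illustrative sketch only -- not the real TTM code.  Instead of
     * changing page attributes at eviction time, defer the (expensive)
     * caching-policy switch to the first CPU page fault.
     */
    struct ttm_buf {
            struct page **pages;
            int num_pages;
            int cpu_caching_changed;        /* 0 until the first CPU fault */
    };

    /* Called from the buffer object's page-fault handler. */
    static void ttm_buf_first_cpu_touch(struct ttm_buf *buf)
    {
            if (buf->cpu_caching_changed)
                    return;
            change_caching_policy(buf->pages, buf->num_pages);  /* hypothetical */
            buf->cpu_caching_changed = 1;
    }

Buffers that get evicted and re-bound without the CPU ever touching them never
pay for a page-attribute change at all, which is where the speedup comes from.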
Added preliminary support for context switches (it triggers the interrupts,
but hangs after the switch; something's not quite right yet).
Removed the PFIFO_REINIT ioctl. I hope that's a good idea...
Requires the upcoming commit to the DDX.
mach64_state.c: convert the DRM_MACH64_BLIT ioctl to submit a pointer to
user-space memory rather than a DMA buffer index, similar to DRM_MACH64_VERTEX.
This change allows the DDX to map the DMA buffers read-only and eliminate a
security problem where a client can alter the contents of the DMA buffer after
submission to the DRM.
This change also affects the DRI/DRM interface. Performance-wise, it basically
affects PCI mode, where I get a ~12% speedup for some Mesa demos I tested.
This is mainly due to eliminating an ioctl for allocating the DMA buffer.
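For reference, the shape of the change is roughly the following. This is an
approximate sketch, not the literal diff; see mach64_drm.h and mach64_state.c
for the real definitions, and note that the local variable names are made up.

    /* The ioctl argument now carries a user-space pointer instead of a
     * DMA buffer index: */
    typedef struct drm_mach64_blit {
            void *buf;              /* user-space pointer (was: int idx) */
            int pitch;
            int offset;
            int format;
            unsigned short x, y;
            unsigned short width, height;
    } drm_mach64_blit_t;

    /* In the dispatch path the DRM copies the client data into a DMA
     * buffer it owns, so the client cannot alter it after submission: */
    if (copy_from_user(dma_buf_vaddr, blit->buf, blit_size))
            return -EFAULT;

Since the client no longer has to obtain a DMA buffer of its own before
submitting, the extra allocation ioctl mentioned above goes away.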
mach64_dma.c: move the responsibility for allocating memory for the DMA ring
in PCI mode to the DDX.
This change affects the DDX/DRM interface and unifies a couple of PCI/AGP code
paths for ring memory in the DRM.
Bump the mach64 DRM version major and date.
Map the DMA buffers from the same linear address as the vertex buffers. If
dev->agp_buffer_token is not set, the mach64 DRM maps the DMA buffers from
linear address 0x0.
Initialize the spinlock unconditionally when struct drm_device is filled in,
and return early in drm_locked_tasklet() if the driver doesn't support IRQs.
This fixes issues on X server startup with versions of xf86-video-intel that
enable the IRQ before they have a context ID.
(cherry picked from commit 7af93dd984)
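The guard described above looks roughly like this (a simplified sketch of
drm_locked_tasklet(); see drm_irq.c for the real function):

    /* Simplified sketch -- drivers without IRQ support bail out before
     * touching the tasklet machinery (and its spinlock) at all. */
    void drm_locked_tasklet(struct drm_device *dev,
                            void (*func)(struct drm_device *))
    {
            if (!drm_core_check_feature(dev, DRIVER_HAVE_IRQ))
                    return;
            /* ... take the tasklet lock and schedule func as before ... */
    }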
It looks like, 'after a while', I915REG_INT_IDENTITY_R for some reason always
has VSYNC_PIPEB_FLAG set in the interrupt handler, even though pipe B is
disabled. So we only increment dev->vbl_received if the corresponding bit is
also set in dev->vblank_pipe.
(cherry picked from commit 881ba56992)
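In the interrupt handler, that check amounts to roughly the following
(simplified; the names follow i915_irq.c, but treat this as a sketch rather
than the exact diff):

    /* Simplified sketch -- only bump the counter for a pipe that userspace
     * actually enabled for vblanks, so the spurious pipe B flag no longer
     * inflates the count. */
    if ((temp & VSYNC_PIPEA_FLAG) &&
        (dev_priv->vblank_pipe & DRM_I915_VBLANK_PIPE_A))
            atomic_inc(&dev->vbl_received);
    if ((temp & VSYNC_PIPEB_FLAG) &&
        (dev_priv->vblank_pipe & DRM_I915_VBLANK_PIPE_B))
            atomic_inc(&dev->vbl_received);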
It looks like this would have caused signals to always get sent on the next
vertical blank, regardless of the sequence number.
(cherry picked from commit cf6b2c5299)
When this flag is set and the target sequence is missed, wait for the next
vertical blank instead of returning immediately.
(cherry picked from commit 89e323e490)
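Assuming the flag in question is _DRM_VBLANK_NEXTONMISS from drm.h, the
retarget boils down to roughly this in the vblank-wait path (simplified; see
drm_wait_vblank() in drm_irq.c for the real code, and the local variable
names here are illustrative):

    /* Simplified sketch -- if the requested sequence has already passed
     * (allowing for counter wrap) and the flag is set, retarget the wait
     * to the next vblank instead of completing immediately. */
    if ((flags & _DRM_VBLANK_NEXTONMISS) &&
        (curseq - vblwait->request.sequence) <= (1 << 23))
            vblwait->request.sequence = curseq + 1;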