Add a mechanism by which we can evade the leading edge of vblank. This
guarantees that no two sprite register writes will land on opposite
sides of the vblank start, which means all the writes get latched
together in one atomic operation.

We do the vblank evade by checking the scanline counter, and if it's
too close to the start of vblank (too close is hardcoded to 100 usec
for now), we wait for the vblank start to pass. In order to eliminate
random delays from the rest of the system, we operate with interrupts
disabled, except while waiting for the vblank obviously.

Note that we now go digging through pipe_to_crtc_mapping[] in the
vblank interrupt handler, which is a bit dangerous since we set up
interrupts before the crtcs. However, in this case, since it's the
vblank interrupt, we don't actually unmask it until some piece of code
requests it.

v2: preempt_check_resched() calls after local_irq_enable() (Jesse)
    Hook up the vblank irq stuff on BDW as well

v3: Pass intel_crtc instead of drm_crtc (Daniel)
    Warn if crtc.mutex isn't locked (Daniel)
    Add an explicit compiler barrier and document the barriers (Daniel)
    Note the irq vs. modeset setup madness in the commit message (Daniel)

v4: Use prepare_to_wait() & co. directly and eliminate vbl_received

v5: Refactor intel_pipe_handle_vblank() vs. drm_handle_vblank() (Chris)
    Check for min/max scanline <= 0 (Chris)
    Don't call intel_pipe_update_end() if start failed totally (Chris)
    Check that the vblank counters match on both sides of the critical
    section (Chris)

v6: Fix atomic update for interlaced modes

v7: Reorder code for better readability (Chris)

v8: Drop preempt_check_resched(). It's not available to modules anymore
    and isn't even needed unless we ourselves cause a wakeup needing
    reschedule while interrupts are off

Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Reviewed-by: Sourab Gupta <sourabgupta@gmail.com>
Reviewed-by: Akash Goel <akash.goels@gmail.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
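For readers unfamiliar with the pattern, here is a minimal sketch of the
evade critical section described above. It is not the driver's actual
code: the struct, the evade_read_scanline() helper, the field names and
the 5 ms safety timeout are simplified stand-ins, and only the
interrupt/wait choreography and the two compiler barriers are meant to
be taken at face value.

#include <linux/compiler.h>
#include <linux/irqflags.h>
#include <linux/jiffies.h>
#include <linux/sched.h>
#include <linux/wait.h>
#include <drm/drmP.h>

/* Simplified stand-in for the real crtc structure; only the fields
 * this sketch needs. */
struct evade_crtc {
	struct drm_device *dev;
	int pipe;
	/* scanline window considered "safe", derived from the 100 usec rule */
	int min_scanline, max_scanline;
	wait_queue_head_t vbl_wait;	/* woken from the vblank irq handler */
	u32 start_vbl_count;
};

/* Hypothetical helper: read the hardware scanline counter for the pipe. */
extern int evade_read_scanline(struct evade_crtc *crtc);

static bool evade_update_start(struct evade_crtc *crtc)
{
	long timeout = msecs_to_jiffies(5);	/* safety net, value illustrative */
	int scanline;
	DEFINE_WAIT(wait);

	if (crtc->min_scanline <= 0 || crtc->max_scanline <= 0)
		return false;

	/* Keep the vblank irq unmasked while we might have to wait on it. */
	if (drm_vblank_get(crtc->dev, crtc->pipe))
		return false;

	local_irq_disable();

	for (;;) {
		/* prepare_to_wait() before sampling the scanline, so a
		 * vblank irq that fires just after the sample still
		 * wakes us up. */
		prepare_to_wait(&crtc->vbl_wait, &wait, TASK_UNINTERRUPTIBLE);

		scanline = evade_read_scanline(crtc);
		if (scanline < crtc->min_scanline ||
		    scanline > crtc->max_scanline || timeout <= 0)
			break;

		/* Too close to vblank start: sleep (irqs on) until the
		 * vblank irq wakes us, then re-check. */
		local_irq_enable();
		timeout = schedule_timeout(timeout);
		local_irq_disable();
	}

	finish_wait(&crtc->vbl_wait, &wait);

	crtc->start_vbl_count = drm_vblank_count(crtc->dev, crtc->pipe);
	drm_vblank_put(crtc->dev, crtc->pipe);

	/* Compiler barrier: no register write may be hoisted above this
	 * point, out of the irqs-off critical section. */
	barrier();
	return true;	/* the caller now performs the sprite register writes */
}

static void evade_update_end(struct evade_crtc *crtc)
{
	u32 end_vbl_count;

	/* ...and no register write may sink below this point. */
	barrier();

	end_vbl_count = drm_vblank_count(crtc->dev, crtc->pipe);
	local_irq_enable();

	/* If the counters differ, a vblank fired mid-update and the
	 * writes were not latched atomically. */
	WARN_ON(crtc->start_vbl_count != end_vbl_count);
}

The vblank interrupt handler is assumed to wake_up(&crtc->vbl_wait);
the timeout merely bounds the wait so a missed wakeup cannot hang the
update forever.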
************************************************************
* For the very latest on DRI development, please see:      *
*   http://dri.freedesktop.org/                            *
************************************************************

The Direct Rendering Manager (drm) is a device-independent kernel-level
device driver that provides support for the XFree86 Direct Rendering
Infrastructure (DRI).

The DRM supports the Direct Rendering Infrastructure (DRI) in four major
ways:

    1. The DRM provides synchronized access to the graphics hardware via
       the use of an optimized two-tiered lock. (A conceptual sketch of
       this lock follows the list below.)

    2. The DRM enforces the DRI security policy for access to the
       graphics hardware by only allowing authenticated X11 clients
       access to restricted regions of memory.

    3. The DRM provides a generic DMA engine, complete with multiple
       queues and the ability to detect the need for an OpenGL context
       switch.

    4. The DRM is extensible via the use of small device-specific
       modules that rely extensively on the API exported by the DRM
       module.

Documentation on the DRI is available from:
    http://dri.freedesktop.org/wiki/Documentation
    http://sourceforge.net/project/showfiles.php?group_id=387
    http://dri.sourceforge.net/doc/

For specific information about kernel-level support, see:

    The Direct Rendering Manager, Kernel Support for the Direct
    Rendering Infrastructure
    http://dri.sourceforge.net/doc/drm_low_level.html

    Hardware Locking for the Direct Rendering Infrastructure
    http://dri.sourceforge.net/doc/hardware_locking_low_level.html

    A Security Analysis of the Direct Rendering Infrastructure
    http://dri.sourceforge.net/doc/security_low_level.html
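To make the "optimized two-tiered lock" in point 1 concrete, here is a
minimal conceptual sketch. It is not the actual DRM code: the lock word
layout, the LOCK_HELD bit and the slowpath wrappers are assumptions.
Only the shape is the point: an uncontended take or release costs a
single compare-and-swap in userspace, and the kernel is entered only
under contention.

#include <stdatomic.h>
#include <stdint.h>

#define LOCK_HELD	0x80000000u	/* illustrative "held" bit */

/* Shared lock word, assumed to be mapped into every client's address
 * space (in the real DRI it lives in a shared memory area). */
static _Atomic uint32_t *hw_lock_word;

/* Hypothetical wrappers around the kernel's blocking lock ioctls. */
void hw_lock_kernel_slowpath(uint32_t context);
void hw_unlock_kernel_slowpath(uint32_t context);

void hw_lock(uint32_t context)
{
	uint32_t unlocked = 0;

	/* Tier 1: one compare-and-swap in userspace; when the lock is
	 * free this costs no system call at all. */
	if (atomic_compare_exchange_strong(hw_lock_word, &unlocked,
					   LOCK_HELD | context))
		return;

	/* Tier 2: contended; fall into the kernel, which queues us and
	 * sleeps until the holder releases the lock. */
	hw_lock_kernel_slowpath(context);
}

void hw_unlock(uint32_t context)
{
	uint32_t held = LOCK_HELD | context;

	/* Fast path: release in userspace if the word is untouched. */
	if (atomic_compare_exchange_strong(hw_lock_word, &held, 0))
		return;

	/* A waiter changed the word; let the kernel wake it up. */
	hw_unlock_kernel_slowpath(context);
}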