Use normalized UV coordinates for SurfaceItem

Relying on the texture matrix to normalize means we multiply every UV
coordinate by 1/scale, which introduces floating-point error and thus
errors in the UV coordinates. If we instead calculate normalized
coordinates directly, we avoid that error and get correct UV
coordinates.
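
As a hedged illustration (not part of this commit), here is a tiny
standalone C++ sketch of that difference; the size 1920.0 and the loop
bounds are arbitrary assumptions:

    #include <cstdio>

    int main()
    {
        // Hypothetical buffer width; any size with an inexact reciprocal shows the effect.
        const double size = 1920.0;
        const double inv = 1.0 / size; // the reciprocal is itself rounded once here

        int mismatches = 0;
        for (int x = 0; x <= 1920; ++x) {
            const double viaMatrix = double(x) * inv; // texture-matrix style: multiply by 1/scale
            const double direct = double(x) / size;   // direct normalization: a single division
            if (viaMatrix != direct) {
                ++mismatches; // off by one ulp whenever the double rounding bites
            }
        }
        std::printf("%d of 1921 coordinates differ\n", mismatches);
        return 0;
    }

Direct division also handles the edge exactly: size / size is exactly
1.0 for any finite non-zero double, while size * (1.0 / size) is not
guaranteed to be.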

Longer term the plan is to make all UV coordinates normalized and get
rid of the CoordinateType altogether.
Author: Arjen Hiemstra
Parent: 7292af3d04
Commit: 2f4fa23e61

@@ -376,7 +376,7 @@ void SceneOpenGL::createRenderNode(Item *item, RenderContext *context)
             .transformMatrix = context->transformStack.top(),
             .opacity = context->opacityStack.top(),
             .hasAlpha = hasAlpha,
-            .coordinateType = UnnormalizedCoordinates,
+            .coordinateType = NormalizedCoordinates,
         });
     }
 }
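
For context on what the old value meant, here is an illustrative sketch
of how a renderer can branch on such a coordinate type when building a
texture matrix. This is not KWin's actual API; apart from the two enum
values, all names here are hypothetical:

    #include <QMatrix4x4>
    #include <QSizeF>

    enum CoordinateType { NormalizedCoordinates, UnnormalizedCoordinates };

    // Hypothetical helper: with UnnormalizedCoordinates the texture matrix
    // must scale buffer-space UVs by 1/size at draw time; with
    // NormalizedCoordinates the UVs are already in [0, 1] and pass through.
    QMatrix4x4 textureMatrixFor(CoordinateType type, const QSizeF &textureSize)
    {
        QMatrix4x4 matrix;
        if (type == UnnormalizedCoordinates) {
            matrix.scale(1.0f / float(textureSize.width()), 1.0f / float(textureSize.height()));
        }
        return matrix;
    }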

@@ -140,10 +140,12 @@ WindowQuadList SurfaceItem::buildQuads() const
         const QPointF bufferBottomRight = m_surfaceToBufferMatrix.map(rect.bottomRight());
         const QPointF bufferBottomLeft = m_surfaceToBufferMatrix.map(rect.bottomLeft());
-        quad[0] = WindowVertex(rect.topLeft(), bufferTopLeft);
-        quad[1] = WindowVertex(rect.topRight(), bufferTopRight);
-        quad[2] = WindowVertex(rect.bottomRight(), bufferBottomRight);
-        quad[3] = WindowVertex(rect.bottomLeft(), bufferBottomLeft);
+        const auto size = m_pixmap->size();
+        quad[0] = WindowVertex(rect.topLeft(), QPointF{bufferTopLeft.x() / size.width(), bufferTopLeft.y() / size.height()});
+        quad[1] = WindowVertex(rect.topRight(), QPointF{bufferTopRight.x() / size.width(), bufferTopRight.y() / size.height()});
+        quad[2] = WindowVertex(rect.bottomRight(), QPointF{bufferBottomRight.x() / size.width(), bufferBottomRight.y() / size.height()});
+        quad[3] = WindowVertex(rect.bottomLeft(), QPointF{bufferBottomLeft.x() / size.width(), bufferBottomLeft.y() / size.height()});
         quads << quad;
     }
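
A minimal standalone sketch of the normalization the added lines
perform, assuming buffer coordinates in [0, size]; normalizedUv is a
hypothetical name, not KWin code. Dividing directly guarantees the
corners land exactly on 0.0 and 1.0, since x / x == 1.0 for any finite
non-zero double:

    #include <QPointF>
    #include <QSizeF>
    #include <cassert>

    static QPointF normalizedUv(const QPointF &bufferPos, const QSizeF &size)
    {
        // One division per component, as in the hunk above; no precomputed 1/size.
        return QPointF(bufferPos.x() / size.width(), bufferPos.y() / size.height());
    }

    int main()
    {
        const QSizeF size(1920.0, 1080.0);
        // The corners normalize exactly, with no last-bit error.
        assert(normalizedUv(QPointF(0.0, 0.0), size).x() == 0.0);
        assert(normalizedUv(QPointF(0.0, 0.0), size).y() == 0.0);
        assert(normalizedUv(QPointF(1920.0, 1080.0), size).x() == 1.0);
        assert(normalizedUv(QPointF(1920.0, 1080.0), size).y() == 1.0);
        return 0;
    }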
