I’m trying to implement an interactive image-stretching effect in SwiftUI, where users can drag four corner points and the image inside should stretch accordingly, following the path created by those points.
Currently, I’m applying a CIPerspectiveTransform to warp the image based on the updated points. However, the problem is that the image doesn’t stretch exactly as expected: it either gets clipped or doesn’t fully conform to the new quadrilateral shape.
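For reference, here is the Core Image step in isolation, stripped of the SwiftUI plumbing. This is just a condensed sketch of the same call that appears in the full code further down; the typed topLeft/topRight properties are the builtin equivalents of the setValue(_:forKey:) calls I use there. The corner points are expected in Core Image’s bottom-left-origin pixel coordinates.

import CoreImage
import CoreImage.CIFilterBuiltins
import UIKit

// Condensed sketch of the warp step on its own.
// Corner points are in Core Image coordinates (origin at the bottom-left, in image pixels).
func warp(_ uiImage: UIImage,
          topLeft: CGPoint, topRight: CGPoint,
          bottomLeft: CGPoint, bottomRight: CGPoint) -> UIImage? {
    guard let input = CIImage(image: uiImage) else { return nil }

    let filter = CIFilter.perspectiveTransform()
    filter.inputImage = input
    filter.topLeft = topLeft
    filter.topRight = topRight
    filter.bottomLeft = bottomLeft
    filter.bottomRight = bottomRight

    guard let output = filter.outputImage else { return nil }

    // Rendering output.extent is where the clipping seems to come from:
    // the extent shifts and grows as the corners move.
    let context = CIContext()
    guard let cgImage = context.createCGImage(output, from: output.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}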
What I Have Done
- A draggable quadrilateral shape where users can move the corners.
- A CIPerspectiveTransform that warps the image based on the updated points.
- The image is supposed to stretch dynamically as the user moves the points.
Issue
- The image doesn’t always fit inside the quadrilateral; sometimes parts extend beyond it or distort incorrectly.
- I need to ensure that no matter how far a point is moved, the image stretches exactly along the changing quadrilateral shape.
Current Code
import SwiftUI
import CoreImage
import CoreImage.CIFilterBuiltins
struct AdjustableImage: View {
    let uiImage: UIImage

    // Corner points of the quadrilateral, in view coordinates.
    @State private var topLeading: CGPoint = .zero
    @State private var topTrailing: CGPoint = .zero
    @State private var bottomLeading: CGPoint = .zero
    @State private var bottomTrailing: CGPoint = .zero

    @State private var processedImage: UIImage?
    @State private var lastSize: CGSize = .zero

    var body: some View {
        GeometryReader { geometry in
            ZStack {
                if let processedImage = processedImage {
                    Image(uiImage: processedImage)
                        .resizable()
                        .frame(width: geometry.size.width, height: geometry.size.height)
                        .clipped()
                } else {
                    Color.clear
                }
                QuadrilateralShape(
                    topLeading: topLeading,
                    topTrailing: topTrailing,
                    bottomLeading: bottomLeading,
                    bottomTrailing: bottomTrailing
                )
                .stroke(Color.red, lineWidth: 2)
                DraggablePoint(position: $topLeading, geometry: geometry)
                DraggablePoint(position: $topTrailing, geometry: geometry)
                DraggablePoint(position: $bottomLeading, geometry: geometry)
                DraggablePoint(position: $bottomTrailing, geometry: geometry)
            }
            .onAppear {
                updatePoints(for: geometry.size)
                processImage(size: geometry.size)
            }
            .onChange(of: geometry.size) { newSize in
                updatePoints(for: newSize)
                processImage(size: newSize)
            }
            // Re-run the warp whenever any corner moves.
            .onChange(of: topLeading) { _ in processImage(size: geometry.size) }
            .onChange(of: topTrailing) { _ in processImage(size: geometry.size) }
            .onChange(of: bottomLeading) { _ in processImage(size: geometry.size) }
            .onChange(of: bottomTrailing) { _ in processImage(size: geometry.size) }
        }
    }
    private func updatePoints(for size: CGSize) {
        guard size != lastSize else { return }
        lastSize = size
        // Reset the corners to the view's rectangle whenever the size changes.
        topLeading = CGPoint(x: 0, y: 0)
        topTrailing = CGPoint(x: size.width, y: 0)
        bottomLeading = CGPoint(x: 0, y: size.height)
        bottomTrailing = CGPoint(x: size.width, y: size.height)
    }

    private func processImage(size: CGSize) {
        guard size != .zero else { return }
        guard let inputImage = CIImage(image: uiImage) else { return }

        // Scale factors from view points to image pixels.
        let imageSize = uiImage.size
        let scaleX = imageSize.width / size.width
        let scaleY = imageSize.height / size.height

        let transformedPoints = [
            convertPoint(topLeading, scaleX: scaleX, scaleY: scaleY, viewHeight: size.height),
            convertPoint(topTrailing, scaleX: scaleX, scaleY: scaleY, viewHeight: size.height),
            convertPoint(bottomLeading, scaleX: scaleX, scaleY: scaleY, viewHeight: size.height),
            convertPoint(bottomTrailing, scaleX: scaleX, scaleY: scaleY, viewHeight: size.height)
        ]

        let filter = CIFilter.perspectiveTransform()
        filter.inputImage = inputImage
        filter.setValue(transformedPoints[0], forKey: "inputTopLeft")
        filter.setValue(transformedPoints[1], forKey: "inputTopRight")
        filter.setValue(transformedPoints[2], forKey: "inputBottomLeft")
        filter.setValue(transformedPoints[3], forKey: "inputBottomRight")

        guard let outputImage = filter.outputImage else { return }
        let context = CIContext()
        guard let cgImage = context.createCGImage(outputImage, from: outputImage.extent) else { return }
        processedImage = UIImage(cgImage: cgImage)
    }

    // Converts a point from view coordinates (top-left origin) to
    // Core Image coordinates (bottom-left origin), scaled to image pixels.
    private func convertPoint(_ point: CGPoint, scaleX: CGFloat, scaleY: CGFloat, viewHeight: CGFloat) -> CIVector {
        let x = point.x * scaleX
        let y = (viewHeight - point.y) * scaleY
        return CIVector(x: x, y: y)
    }
}
struct DraggablePoint: View {
    @Binding var position: CGPoint
    var geometry: GeometryProxy

    var body: some View {
        Circle()
            .fill(Color.blue)
            .frame(width: 40, height: 40)
            .contentShape(Circle())
            .position(position)
            .gesture(
                DragGesture()
                    .onChanged { value in
                        // Keep the handle inside the trailing/bottom edges of the view.
                        var newLocation = value.location
                        newLocation.x = min(newLocation.x, geometry.size.width)
                        newLocation.y = min(newLocation.y, geometry.size.height)
                        position = newLocation
                    }
            )
    }
}
struct QuadrilateralShape: Shape {
    var topLeading: CGPoint
    var topTrailing: CGPoint
    var bottomLeading: CGPoint
    var bottomTrailing: CGPoint

    func path(in rect: CGRect) -> Path {
        var path = Path()
        path.move(to: topLeading)
        path.addLine(to: topTrailing)
        path.addLine(to: bottomTrailing)
        path.addLine(to: bottomLeading)
        path.closeSubpath()
        return path
    }
}
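For reference, this is roughly how I’m presenting the view; the asset name "sample" and the frame size are just placeholders here.

struct AdjustableImageDemo: View {
    var body: some View {
        // "sample" is a placeholder asset name; any UIImage works.
        AdjustableImage(uiImage: UIImage(named: "sample") ?? UIImage())
            .frame(width: 320, height: 420)
            .padding()
    }
}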
What I Want
- As the user drags the points, the image should stretch exactly within the new quadrilateral.
- The entire image should always stay inside the quadrilateral and not extend outside or get clipped.
- If a point is moved far away, the image should scale/stretch dynamically instead of just shifting perspective.
I’ve considered:
- Using Metal with a custom vertex shader to handle the image distortion better.
- Using a more advanced Core Image filter that allows non-linear distortions instead of just a perspective transform.
- Manually mapping texture coordinates in a custom SwiftUI Canvas to redraw the image dynamically (see the sketch after this list).
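For the Canvas idea in the last bullet, this is the kind of mapping I have in mind: a bilinear map from unit-square texture coordinates to the dragged quadrilateral. It is only a sketch (nothing is wired up yet), and it is not a projective map, so it would behave differently from CIPerspectiveTransform in the interior.

import CoreGraphics

// Bilinear map from unit-square coordinates (u, v) in [0, 1] to a point
// inside the quadrilateral defined by the four dragged corners.
// Sketch of the texture-coordinate mapping needed for the Canvas approach.
func bilinearPoint(u: CGFloat, v: CGFloat,
                   topLeading: CGPoint, topTrailing: CGPoint,
                   bottomLeading: CGPoint, bottomTrailing: CGPoint) -> CGPoint {
    // Interpolate along the top and bottom edges first…
    let top = CGPoint(x: topLeading.x + (topTrailing.x - topLeading.x) * u,
                      y: topLeading.y + (topTrailing.y - topLeading.y) * u)
    let bottom = CGPoint(x: bottomLeading.x + (bottomTrailing.x - bottomLeading.x) * u,
                         y: bottomLeading.y + (bottomTrailing.y - bottomLeading.y) * u)
    // …then between those two edge points.
    return CGPoint(x: top.x + (bottom.x - top.x) * v,
                   y: top.y + (bottom.y - top.y) * v)
}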
Image