This article shares the principle behind a canvas erasing animation, the implementation code, and the problems encountered along the way, for your reference. The details are as follows.
The goal is to erase one image on a mobile device to reveal another image underneath, implemented with canvas.
If the user erases manually, you need to listen for events such as touchmove and touchend, calculate the corresponding coordinates, and use canvas methods such as clearRect, or rect combined with clipping, to carve out arcs or lines. However, this approach causes noticeable lag on Android phones.
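For comparison, here is a minimal sketch of what that manual, touch-driven version might look like. The element id and the 60x60 erase block are illustrative assumptions, not code from the article:

let canvas = document.getElementById('cas-1');   // assumed element id
let ctx = canvas.getContext('2d');

canvas.addEventListener('touchmove', (e) => {
  e.preventDefault();
  let rect = canvas.getBoundingClientRect();
  let touch = e.touches[0];
  // map page coordinates to canvas-buffer coordinates
  let x = (touch.clientX - rect.left) * (canvas.width / rect.width);
  let y = (touch.clientY - rect.top) * (canvas.height / rect.height);
  ctx.clearRect(x - 30, y - 30, 60, 60);         // erase a small block under the finger
});

canvas.addEventListener('touchend', () => {
  // here you could sample the remaining pixels to decide whether
  // enough of the top image has been erased
});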
canvas has a globalCompositeOperation property. Its default value is source-over, meaning new drawing is layered on top of the existing pixels. Another value is destination-out, which keeps the existing content only outside the area you draw: whatever you draw makes the existing pixels in that area transparent. With this mode you no longer need clip and a series of related calls; you can simply draw thick lines or arcs. That reduces the number of calls to the drawing context API, improves performance, and runs much more smoothly on Android.
Here is my erase code:
let requestAnimationFrame = window.requestAnimationFrame || window.mozRequestAnimationFrame || window.webkitRequestAnimationFrame || window.msRequestAnimationFrame;
let cancelAnimationFrame = window.cancelAnimationFrame || window.mozCancelAnimationFrame;
let a = 60;    // radius of each erased dot
let idx = 0;   // index of the next coordinate to erase
let ts;        // handle returned by requestAnimationFrame
let canvasCleaner = document.getElementById('cas-1');
let ctxCleaner = canvasCleaner.getContext('2d');
let canvasCleanerBox = document.querySelector('.slide-4');
let imgCleaner = new Image();

canvasCleaner.width = canvasCleanerBox.clientWidth * 2;
canvasCleaner.height = canvasCleanerBox.clientHeight * 2;
canvasCleaner.style.width = canvasCleanerBox.clientWidth + 'px';
canvasCleaner.style.height = canvasCleanerBox.clientHeight + 'px';

imgCleaner.src = 'https://gw.alicdn.com/tps/TB1XbyCKVXXXXXEXpXXXXXXXXX-1080-1920.jpg';
imgCleaner.onload = () => {
  // scale the image height so the aspect ratio is preserved at the canvas width
  let w = canvasCleaner.width * (imgCleaner.height / imgCleaner.width);
  ctxCleaner.drawImage(imgCleaner, 0, 0, canvasCleaner.width, w);
  ctxCleaner.lineCap = 'round';   // lineCap sets the style of the caps at the ends of lines
  ctxCleaner.lineJoin = 'round';
  ctxCleaner.lineWidth = 100;     // width of the current line
  ctxCleaner.globalCompositeOperation = 'destination-out';
};

let drawline = (x1, y1, ctx) => {
  ctx.save();
  ctx.beginPath();
  ctx.arc(x1, y1, a, 0, 2 * Math.PI);
  ctx.fill();   // fill() fills the current path; the default color is black
  ctx.restore();
};

/* d holds the coordinates of the points in the area to erase. Since I simulated
   the shape to be erased by hand, the data I used looks roughly like this:
let d2 = [
  [1,190],[30,180],[60,170],[90,168],[120,167],[150,165],[180,164],[210,163],[240,160],[270,159],[300,154],[330,153],[360,152],
  [390,150],[420,140],[450,130],[480,120],[510,120],[540,120],[570,120],[600,120],[630,120],[660,120],[690,120],[720,120],
  [1,190],[20,189],[28,186],[45,185],[50,185],[62,184],[64,182],[90,180],[120,178],
  [160,176],[200,174],[240,172]
];
*/

let draw = (d, ctx) => {
  if (idx >= d.length) {
    cancelAnimationFrame(ts);
  } else {
    drawline(d[idx][0], d[idx][1], ctx);
    idx++;
    ts = requestAnimationFrame(() => {
      draw(d, ctx);
    });
  }
};

Because I display the erase animation directly on the page and do not require the user to wipe it themselves, I calculate the coordinates of the erase area myself and then use requestAnimationFrame to drive the animation. I started with setInterval, but found that setInterval would eventually get out of sync, so I recommend against using setInterval.
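The snippet defines draw but never calls it. A plausible way to kick off the animation, assuming the d2 coordinate array from the comment above has actually been defined, would be something like this (this start-up code is my own addition, not part of the original):

// Hypothetical kick-off: start erasing once the background image is on the canvas.
imgCleaner.addEventListener('load', () => {
  idx = 0;                           // start from the first coordinate
  ts = requestAnimationFrame(() => {
    draw(d2, ctxCleaner);            // erase along the pre-computed path
  });
});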
While implementing this effect, I ran into a problem: when drawing an image onto the canvas with drawImage, why does the image look very blurry on the page?
It turns out this is because the browser's window object has a devicePixelRatio property, which determines how many device pixels (usually 2 on retina screens) are used to render one CSS pixel. If devicePixelRatio is 2, an image that is 100*100 CSS pixels actually occupies 200*200 device pixels on a retina screen, but it still contains only 100*100 pixels of data, so it is effectively enlarged twice and therefore looks blurry.
The same reasoning applies to canvas: we can treat the canvas as an image. When the browser renders the canvas, each canvas pixel is likewise rendered with 2 device pixels, so images or text drawn on it look blurry on most retina devices.
Similarly, the canvas context has a webkitBackingStorePixelRatio property (Safari and Chrome only). Its value determines how many pixels the browser uses to store the canvas content in memory before rendering it. In Safari on iOS 6 its value is 2, so a 100*100 drawing is stored as a 200*200 buffer in memory; when the browser renders the canvas on the retina screen it occupies 200*200 device pixels, which exactly matches the backing store, so there is no blurring in Safari on iOS 6. In Chrome and in Safari on iOS 7, however, the result does get blurry, because webkitBackingStorePixelRatio is 1 in both of them.
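To make this concrete, the usual way to work out how much the canvas buffer needs to be scaled is to divide devicePixelRatio by the backing-store ratio. A small sketch, assuming ctx is the canvas 2d context (the vendor-prefixed fallbacks are commonly seen in HiDPI canvas code, not something taken from this article):

let dpr = window.devicePixelRatio || 1;
let bsr = ctx.webkitBackingStorePixelRatio ||
          ctx.mozBackingStorePixelRatio ||
          ctx.msBackingStorePixelRatio ||
          ctx.oBackingStorePixelRatio ||
          ctx.backingStorePixelRatio || 1;
let ratio = dpr / bsr;   // 2 / 1 = 2 in Chrome and iOS 7 Safari, 2 / 2 = 1 in iOS 6 Safari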
Solution:
canvas.width = canvasBox.clientWidth * 2;
canvas.height = canvasBox.clientHeight * 2;
canvas.style.width = canvasBox.clientWidth + 'px';
canvas.style.height = canvasBox.clientHeight + 'px';
w = canvas.width * (img.height / img.width);
ctx.drawImage(img, 0, 0, canvas.width, w);
That is, you create a canvas whose internal size is twice the displayed size, and then use CSS to constrain the canvas to its actual display size. Alternatively, there is a polyfill on GitHub for this, but when I tried it, it did not seem to work.
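If you prefer to wrap the same idea in a reusable helper rather than doubling the sizes by hand, one possible sketch looks like this. The function name and the use of ctx.scale are my own additions; the article's code instead keeps drawing in buffer coordinates:

function setupHiDPICanvas(canvas, cssWidth, cssHeight, ratio) {
  canvas.width = cssWidth * ratio;        // enlarge the drawing buffer
  canvas.height = cssHeight * ratio;
  canvas.style.width = cssWidth + 'px';   // keep the displayed size unchanged
  canvas.style.height = cssHeight + 'px';
  let ctx = canvas.getContext('2d');
  ctx.scale(ratio, ratio);                // later drawing code can keep using CSS-pixel coordinates
  return ctx;
}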
That is all the content of this article. I hope it is helpful to your learning, and I hope you will continue to support Wulin.com.