Sliding verification codes have recently become popular on many websites. On the one hand, they feel relatively novel to users and are simple to operate; on the other hand, their security is not much lower than that of traditional graphic verification codes. Of course, no verification scheme is absolutely secure; each one merely raises the cost of a bypass for attackers.
Next, let's analyze the core process of a sliding verification code:
1. The backend randomly generates the cutout and a background image carrying the cutout shadow, and saves the random cutout position coordinates on the server side.
2. The front end implements the sliding interaction: the user drags the cutout onto the cutout shadow, and the front end records the user's sliding distance.
3. The front end passes the user's sliding distance to the backend, which checks whether the error is within the allowable range.
Simply verifying the user's sliding distance is the most basic check. For higher security, the entire sliding trajectory, the user's access behavior on the current page, and so on can also be taken into account. These checks can become complex and may even rely on user-behavior analysis models; the ultimate goal is always to make illegal simulation and bypassing more difficult. Commonly used methods in that direction deserve a summary of their own at another time. This article focuses on how to generate a sliding verification code step by step in Java.
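As a minimal illustration of the basic distance check described above (the class, method, and tolerance value here are my own, not taken from the article's source), the backend comparison can look like:

```java
// Hypothetical server-side check; all names and the tolerance are illustrative.
public class SlideVerifier {
    // Allowed pixel error between the stored cutout x-coordinate
    // and the distance the user actually slid
    private static final int TOLERANCE = 5;

    public static boolean verify(int storedX, int slidDistance) {
        return Math.abs(storedX - slidDistance) <= TOLERANCE;
    }
}
```

A real deployment would additionally expire the stored coordinate after a single attempt and rate-limit verification requests.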
As can be seen, the sliding graphic verification code consists of two important pictures: the cutout block and the original picture with the cutout shadow. Two important properties make brute-forcing difficult: the shape of the cutout and its position on the original picture are both random. This yields random, irregular pairings of cutouts and originals, even from a limited set of source images.
So how do we use code to extract a small picture with a specific random shape from a large picture?
The first step is to determine the outline of the cutout image, so that the actual image-processing operations can be carried out later.
A picture is composed of pixels, and each pixel corresponds to a color, which can be represented in RGB form plus a transparency value. Treat the picture as a plane figure with the origin at the upper-left corner, the x-axis pointing right, and the y-axis pointing down; each coordinate then maps to the color of the pixel at that position, so the whole picture can be converted into a two-dimensional array. Following the same idea, the outline is also represented as a two-dimensional array, with elements inside the outline set to 1 and elements outside set to 0.
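The pixel-grid view described above can be sketched like this (the helper class is my own; only the standard java.awt API is assumed):

```java
import java.awt.image.BufferedImage;

// Convert a picture into the two-dimensional array described above:
// origin at the upper-left corner, x to the right, y downward.
public class PixelGrid {
    public static int[][] toArray(BufferedImage img) {
        int[][] pixels = new int[img.getWidth()][img.getHeight()];
        for (int x = 0; x < img.getWidth(); x++) {
            for (int y = 0; y < img.getHeight(); y++) {
                pixels[x][y] = img.getRGB(x, y); // packed 0xAARRGGBB value
            }
        }
        return pixels;
    }
}
```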
Now, how do we generate this outline shape? We have a coordinate system, rectangles, and circles, so mathematical graph functions are the natural tool. Typically, the equation of a circle and the line equations of a rectangle's sides are used, along the lines of:
In (x − a)² + (y − b)² = r² there are three parameters a, b, and r: the center coordinate is (a, b) and the radius is r. Place the circles that shape the cutout on the coordinate system described above, and the specific values are easy to read off the figure.
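The circle equation translates directly into a point-membership test, which is what the outline generation below is built on. A tiny sketch (the helper name is mine):

```java
// A pixel (x, y) is inside the circle centered at (a, b) with radius r
// exactly when (x - a)^2 + (y - b)^2 <= r^2.
public class CircleMath {
    public static boolean inside(double x, double y, double a, double b, double r) {
        return Math.pow(x - a, 2) + Math.pow(y - b, 2) <= r * r;
    }
}
```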
The sample code is as follows:
private int[][] getBlockData() {
    int[][] data = new int[targetLength][targetWidth];
    // Center x of the bulge circle on the right edge
    double x2 = targetLength - circleR - 2;
    // The position of the randomly generated circle
    double h1 = circleR + Math.random() * (targetWidth - 3 * circleR - r1);
    double po = circleR * circleR;
    double xbegin = targetLength - circleR - r1;
    double ybegin = targetWidth - circleR - r1;
    for (int i = 0; i < targetLength; i++) {
        for (int j = 0; j < targetWidth; j++) {
            // Top circle: a concave notch cut out of the block
            // (the d1 and d2 definitions were missing from the original
            // listing; the centers chosen here are reconstructed assumptions)
            double d1 = Math.pow(i - h1, 2) + Math.pow(j - circleR, 2);
            // Bottom circle: only its interior is kept in the bottom strip
            double d2 = Math.pow(i - h1, 2) + Math.pow(j - ybegin, 2);
            // Right circle: only its interior is kept in the right strip
            double d3 = Math.pow(i - x2, 2) + Math.pow(j - h1, 2);
            if (d1 <= po
                    || (j >= ybegin && d2 >= po)
                    || (i >= xbegin && d3 >= po)) {
                data[i][j] = 0; // outside the outline
            } else {
                data[i][j] = 1; // inside the outline
            }
        }
    }
    return data;
}

The second step: once you have this outline, you can determine the cutout from the values of the two-dimensional array and add a shadow at the cutout position on the original image.
The operation is as follows:
private void cutByTemplate(BufferedImage oriImage, BufferedImage targetImage,
        int[][] templateImage, int x, int y) {
    for (int i = 0; i < targetLength; i++) {
        for (int j = 0; j < targetWidth; j++) {
            int rgb = templateImage[i][j];
            // Color value at the corresponding position of the original image
            int rgb_ori = oriImage.getRGB(x + i, y + j);
            if (rgb == 1) {
                // Copy the color value onto the cutout image
                targetImage.setRGB(i, j, rgb_ori);
                // Swap the low and high color channels and set a fixed alpha
                // to discolor the cutout position on the original image
                int r = (0xff & rgb_ori);
                int g = (0xff & (rgb_ori >> 8));
                int b = (0xff & (rgb_ori >> 16));
                rgb_ori = r + (g << 8) + (b << 16) + (200 << 24);
                oriImage.setRGB(x + i, y + j, rgb_ori);
            }
        }
    }
}

After the first two steps, you have the cutout and the original picture with the cutout shadow. To increase confusion and improve network loading, further image processing is needed. Generally there are two things to do: blur the images to make machine recognition harder, and compress them appropriately without hurting quality. For blurring, Gaussian blur is the obvious choice; the principle is easy to understand, and you can look it up online. There are many Java implementations; here is one that needs no third-party jars:
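To tie the two steps together, the cutout position itself must be random while keeping the block fully inside the original picture. A hedged glue sketch (the class and method names are mine, not from the article's full source):

```java
import java.awt.image.BufferedImage;
import java.util.Random;

// Pick a random top-left corner (x, y) for the cutout so that a block of
// targetLength x targetWidth pixels fits entirely inside the original image.
public class CutoutPosition {
    public static int[] random(BufferedImage ori, int targetLength, int targetWidth) {
        Random rnd = new Random();
        int x = rnd.nextInt(ori.getWidth() - targetLength);
        int y = rnd.nextInt(ori.getHeight() - targetWidth);
        return new int[]{x, y};
    }
}
```

With (x, y) chosen, allocate the cutout as a BufferedImage with an alpha channel (e.g. TYPE_4BYTE_ABGR, so pixels outside the outline stay transparent) and pass everything to cutByTemplate.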
public static ConvolveOp getGaussianBlurFilter(int radius, boolean horizontal) {
    if (radius < 1) {
        throw new IllegalArgumentException("Radius must be >= 1");
    }
    int size = radius * 2 + 1;
    float[] data = new float[size];
    float sigma = radius / 3.0f;
    float twoSigmaSquare = 2.0f * sigma * sigma;
    float sigmaRoot = (float) Math.sqrt(twoSigmaSquare * Math.PI);
    float total = 0.0f;
    for (int i = -radius; i <= radius; i++) {
        float distance = i * i;
        int index = i + radius;
        data[index] = (float) Math.exp(-distance / twoSigmaSquare) / sigmaRoot;
        total += data[index];
    }
    for (int i = 0; i < data.length; i++) {
        data[i] /= total;
    }
    Kernel kernel;
    if (horizontal) {
        kernel = new Kernel(size, 1, data);
    } else {
        kernel = new Kernel(1, size, data);
    }
    return new ConvolveOp(kernel, ConvolveOp.EDGE_NO_OP, null);
}

public static void simpleBlur(BufferedImage src, BufferedImage dest) {
    BufferedImageOp op = getGaussianBlurFilter(2, false);
    op.filter(src, dest);
}

The blurring effect turned out well in testing. Next comes image compression, again implemented without third-party tools:
public static byte[] fromBufferedImage2(BufferedImage img, String imageType) throws IOException {
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    // Get a writer for the requested format, e.g. "jpg"
    Iterator<ImageWriter> iter = ImageIO.getImageWritersByFormatName(imageType);
    ImageWriter writer = iter.next();
    // Output parameter settings of this writer (ImageWriteParam)
    ImageWriteParam iwp = writer.getDefaultWriteParam();
    iwp.setCompressionMode(ImageWriteParam.MODE_EXPLICIT); // enable explicit compression
    iwp.setCompressionQuality(1f); // compression quality parameter, 0..1
    iwp.setProgressiveMode(ImageWriteParam.MODE_DISABLED);
    // Color mode used during compression
    ColorModel colorModel = ColorModel.getRGBdefault();
    iwp.setDestinationType(new javax.imageio.ImageTypeSpecifier(
            colorModel, colorModel.createCompatibleSampleModel(16, 16)));
    writer.setOutput(ImageIO.createImageOutputStream(bos));
    IIOImage iioImage = new IIOImage(img, null, null);
    writer.write(null, iioImage, iwp);
    writer.dispose();
    return bos.toByteArray();
}

At this point, the core code flow of the sliding verification code is complete. Many details can still be polished and optimized to make the sliding experience better. I hope this helps readers who are preparing to build their own sliding verification code.
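For comparison, a plain ImageIO baseline without any ImageWriteParam tuning (the helper name is mine) is often enough when the verification images are small:

```java
import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.IOException;

// Encode a BufferedImage to a byte array with default writer settings;
// useful as a size/speed baseline before tuning the writer parameters.
public class ImageBytes {
    public static byte[] encode(BufferedImage img, String format) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ImageIO.write(img, format, bos);
        return bos.toByteArray();
    }
}
```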
The code above is deliberately condensed, partly for performance and partly for readability; for various reasons it is not convenient to go into every detail here, so feel free to leave a message if you have questions. In testing, end-to-end generation of the sliding graphics can be kept to about 20ms; with an original image below 300px*150px it reaches about 10ms, which is within an acceptable range. If you know a more efficient way, advice is welcome. I also hope everyone will continue to support Wulin.com.