Generally speaking, after a Redis client issues a request, it blocks and waits for the Redis server to process it; once the server finishes, it returns the result to the client in a response message.
This is somewhat similar to HBase's Scan: by default, the client makes one round trip to the server for every record it fetches.
In Redis, is there anything similar to HBase's Scanner Caching, which returns multiple records per request?
Yes: Pipeline. Official introduction: http://redis.io/topics/pipelining
When performing large batches of operations, a pipeline saves much of the time otherwise wasted on network round trips. Note, however, that when commands are packed and sent through a pipeline, Redis must buffer the results of all of them before it can reply, so the more commands you pack, the more memory the server consumes. Packing more commands is therefore not always better.
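One common way to bound that memory cost is to sync the pipeline every N commands instead of packing everything into a single batch. The sketch below simulates only the batching logic (the batch size of 1000 and the class/method names are my own choices, not from the original article); with real Jedis, the "queue" step would be a call such as `p.hmset(...)` and the "flush" step would be `p.sync()`:

```java
import java.util.ArrayList;
import java.util.List;

public class ChunkedPipeline {
    // Flush every BATCH commands so the server never has to buffer
    // more than BATCH replies at once (1000 is an arbitrary choice).
    static final int BATCH = 1000;

    // Simulates queueing 'total' commands and syncing in chunks;
    // returns how many sync (flush) round trips were needed.
    static int countSyncs(int total) {
        List<Integer> buffered = new ArrayList<Integer>();
        int syncs = 0;
        for (int i = 0; i < total; i++) {
            buffered.add(i);              // queue one command locally
            if (buffered.size() == BATCH) {
                buffered.clear();         // like p.sync(): send batch, read replies
                syncs++;
            }
        }
        if (!buffered.isEmpty()) {        // flush the final partial batch
            buffered.clear();
            syncs++;
        }
        return syncs;
    }

    public static void main(String[] args) {
        System.out.println(countSyncs(10000)); // 10 full batches of 1000
    }
}
```

With this pattern, 10,000 commands cost 10 round trips instead of 10,000, while the server only ever buffers 1,000 replies at a time.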
With a pipeline, batch reads and writes to Redis become dramatically faster.
Here is a Java test:
```java
package com.lxw1234.redis;

import java.util.HashMap;
import java.util.Map;
import java.util.Set;

import redis.clients.jedis.Jedis;
import redis.clients.jedis.Pipeline;
import redis.clients.jedis.Response;

public class Test {
    public static void main(String[] args) throws Exception {
        Jedis redis = new Jedis("127.0.0.1", 6379, 400000);
        Map<String,String> data = new HashMap<String,String>();
        redis.select(8);
        redis.flushDB();

        // hmset without pipeline
        long start = System.currentTimeMillis();
        for (int i = 0; i < 10000; i++) {
            data.clear();
            data.put("k_" + i, "v_" + i);
            redis.hmset("key_" + i, data);
        }
        long end = System.currentTimeMillis();
        System.out.println("dbsize:[" + redis.dbSize() + "] ..");
        System.out.println("hmset without pipeline used [" + (end - start) / 1000 + "] seconds ..");

        redis.select(8);
        redis.flushDB();

        // hmset with pipeline
        Pipeline p = redis.pipelined();
        start = System.currentTimeMillis();
        for (int i = 0; i < 10000; i++) {
            data.clear();
            data.put("k_" + i, "v_" + i);
            p.hmset("key_" + i, data);
        }
        p.sync();
        end = System.currentTimeMillis();
        System.out.println("dbsize:[" + redis.dbSize() + "] ..");
        System.out.println("hmset with pipeline used [" + (end - start) / 1000 + "] seconds ..");

        // hgetAll without pipeline
        Set<String> keys = redis.keys("*");
        start = System.currentTimeMillis();
        Map<String,Map<String,String>> result = new HashMap<String,Map<String,String>>();
        for (String key : keys) {
            result.put(key, redis.hgetAll(key));
        }
        end = System.currentTimeMillis();
        System.out.println("result size:[" + result.size() + "] ..");
        System.out.println("hgetAll without pipeline used [" + (end - start) / 1000 + "] seconds ..");

        // hgetAll with pipeline
        Map<String,Response<Map<String,String>>> responses =
                new HashMap<String,Response<Map<String,String>>>(keys.size());
        result.clear();
        start = System.currentTimeMillis();
        for (String key : keys) {
            responses.put(key, p.hgetAll(key));
        }
        p.sync();
        for (String k : responses.keySet()) {
            result.put(k, responses.get(k).get());
        }
        end = System.currentTimeMillis();
        System.out.println("result size:[" + result.size() + "] ..");
        System.out.println("hgetAll with pipeline used [" + (end - start) / 1000 + "] seconds ..");

        redis.disconnect();
    }
}
```

The test results are as follows:
```
dbsize:[10000] ..
hmset without pipeline used [243] seconds ..
dbsize:[10000] ..
hmset with pipeline used [0] seconds ..
result size:[10000] ..
hgetAll without pipeline used [243] seconds ..
result size:[10000] ..
hgetAll with pipeline used [0] seconds ..
```
With a pipeline, batch reading and writing of 10,000 records completes in under a second, compared with several minutes without one.