Preface
This article describes how to integrate MyBatis, Druid, and PageHelper with Spring Boot, and how to implement multiple data sources and pagination. Integrating Spring Boot with MyBatis was covered in a previous article, so it is not repeated here; the focus is on how to configure Druid and PageHelper when there are multiple data sources.
Druid introduction and use
Before using Druid, let’s take a brief look at Druid.
Druid is a database connection pool, and arguably one of the best available. It is favored by developers for its features, performance, and scalability. Druid powers more than 600 applications at Alibaba and has withstood more than a year of rigorous large-scale production deployment. It is a connection pool that Alibaba describes as "born for monitoring"!
At the same time, Druid is more than a database connection pool; its core consists of three parts: DruidDriver, a proxy driver with a Filter-Chain based plugin system; DruidDataSource, the connection pool itself; and SQLParser. I won't go through the full feature list here; please refer to the official documentation for details.
Now let's look at how to use Druid.
First, the Maven dependency; you only need to add the druid jar:
```xml
<dependency>
    <groupId>com.alibaba</groupId>
    <artifactId>druid</artifactId>
    <version>1.1.8</version>
</dependency>
```
In terms of configuration, the main additions go in application.properties or application.yml, as follows.
Note: because I use two data sources here, the layout differs slightly from the single-source case. The Druid settings are explained by the comments below, so I won't repeat them here.
```properties
## Default data source
master.datasource.url=jdbc:mysql://localhost:3306/springBoot?useUnicode=true&characterEncoding=utf8&allowMultiQueries=true
master.datasource.username=root
master.datasource.password=123456
master.datasource.driverClassName=com.mysql.jdbc.Driver

## Another data source
cluster.datasource.url=jdbc:mysql://localhost:3306/springBoot_test?useUnicode=true&characterEncoding=utf8
cluster.datasource.username=root
cluster.datasource.password=123456
cluster.datasource.driverClassName=com.mysql.jdbc.Driver

# Connection pool configuration
spring.datasource.type=com.alibaba.druid.pool.DruidDataSource
# Initial size, minimum, maximum
spring.datasource.initialSize=5
spring.datasource.minIdle=5
spring.datasource.maxActive=20
# Maximum wait time to get a connection, in milliseconds
spring.datasource.maxWait=60000
# Interval between checks for idle connections that need to be closed, in milliseconds
spring.datasource.timeBetweenEvictionRunsMillis=60000
# Minimum time a connection stays idle in the pool, in milliseconds
spring.datasource.minEvictableIdleTimeMillis=300000
spring.datasource.validationQuery=SELECT 1 FROM DUAL
spring.datasource.testWhileIdle=true
spring.datasource.testOnBorrow=false
spring.datasource.testOnReturn=false
# Open PSCache and set its size per connection
spring.datasource.poolPreparedStatements=true
spring.datasource.maxPoolPreparedStatementPerConnectionSize=20
# Filters for monitoring statistics; without 'stat' the monitoring page cannot count SQL. 'wall' is the firewall
spring.datasource.filters=stat,wall,log4j
# Enable the mergeSql function and slow SQL logging via connectionProperties
spring.datasource.connectionProperties=druid.stat.mergeSql=true;druid.stat.slowSqlMillis=5000
```
After successfully adding the configuration file, let's write Druid-related classes.
First, the MasterDataSourceConfig.java class, which is the default data source configuration class.
```java
@Configuration
@MapperScan(basePackages = MasterDataSourceConfig.PACKAGE, sqlSessionFactoryRef = "masterSqlSessionFactory")
public class MasterDataSourceConfig {

    static final String PACKAGE = "com.pancm.dao.master";
    static final String MAPPER_LOCATION = "classpath:mapper/master/*.xml";

    @Value("${master.datasource.url}")
    private String url;

    @Value("${master.datasource.username}")
    private String username;

    @Value("${master.datasource.password}")
    private String password;

    @Value("${master.datasource.driverClassName}")
    private String driverClassName;

    @Value("${spring.datasource.initialSize}")
    private int initialSize;

    @Value("${spring.datasource.minIdle}")
    private int minIdle;

    @Value("${spring.datasource.maxActive}")
    private int maxActive;

    @Value("${spring.datasource.maxWait}")
    private int maxWait;

    @Value("${spring.datasource.timeBetweenEvictionRunsMillis}")
    private int timeBetweenEvictionRunsMillis;

    @Value("${spring.datasource.minEvictableIdleTimeMillis}")
    private int minEvictableIdleTimeMillis;

    @Value("${spring.datasource.validationQuery}")
    private String validationQuery;

    @Value("${spring.datasource.testWhileIdle}")
    private boolean testWhileIdle;

    @Value("${spring.datasource.testOnBorrow}")
    private boolean testOnBorrow;

    @Value("${spring.datasource.testOnReturn}")
    private boolean testOnReturn;

    @Value("${spring.datasource.poolPreparedStatements}")
    private boolean poolPreparedStatements;

    @Value("${spring.datasource.maxPoolPreparedStatementPerConnectionSize}")
    private int maxPoolPreparedStatementPerConnectionSize;

    @Value("${spring.datasource.filters}")
    private String filters;

    @Value("${spring.datasource.connectionProperties}")
    private String connectionProperties;

    @Bean(name = "masterDataSource")
    @Primary
    public DataSource masterDataSource() {
        DruidDataSource dataSource = new DruidDataSource();
        dataSource.setUrl(url);
        dataSource.setUsername(username);
        dataSource.setPassword(password);
        dataSource.setDriverClassName(driverClassName);

        // Pool-specific configuration
        dataSource.setInitialSize(initialSize);
        dataSource.setMinIdle(minIdle);
        dataSource.setMaxActive(maxActive);
        dataSource.setMaxWait(maxWait);
        dataSource.setTimeBetweenEvictionRunsMillis(timeBetweenEvictionRunsMillis);
        dataSource.setMinEvictableIdleTimeMillis(minEvictableIdleTimeMillis);
        dataSource.setValidationQuery(validationQuery);
        dataSource.setTestWhileIdle(testWhileIdle);
        dataSource.setTestOnBorrow(testOnBorrow);
        dataSource.setTestOnReturn(testOnReturn);
        dataSource.setPoolPreparedStatements(poolPreparedStatements);
        dataSource.setMaxPoolPreparedStatementPerConnectionSize(maxPoolPreparedStatementPerConnectionSize);
        try {
            dataSource.setFilters(filters);
        } catch (SQLException e) {
            e.printStackTrace();
        }
        dataSource.setConnectionProperties(connectionProperties);
        return dataSource;
    }

    @Bean(name = "masterTransactionManager")
    @Primary
    public DataSourceTransactionManager masterTransactionManager() {
        return new DataSourceTransactionManager(masterDataSource());
    }

    @Bean(name = "masterSqlSessionFactory")
    @Primary
    public SqlSessionFactory masterSqlSessionFactory(@Qualifier("masterDataSource") DataSource masterDataSource) throws Exception {
        final SqlSessionFactoryBean sessionFactory = new SqlSessionFactoryBean();
        sessionFactory.setDataSource(masterDataSource);
        sessionFactory.setMapperLocations(new PathMatchingResourcePatternResolver()
                .getResources(MasterDataSourceConfig.MAPPER_LOCATION));
        return sessionFactory.getObject();
    }
}
```

Two of the annotations used here deserve explanation:
**@Primary**: marks this bean as the preferred candidate when several beans of the same type are available for injection. When configuring multiple data sources, note that exactly one of them must be the primary data source, marked with **@Primary**.
**@MapperScan**: scans the Mapper interfaces and registers them for container management.
Note that sqlSessionFactoryRef specifies which SqlSessionFactory instance the scanned mappers are bound to.
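To make the scanning concrete, here is a minimal sketch of what a mapper interface under the scanned package could look like. The interface, method, and User entity are illustrative, not the project's actual DAO; the project binds its queries through the XML mappers under classpath:mapper/master/ instead.

```java
package com.pancm.dao.master;

import org.apache.ibatis.annotations.Param;
import org.apache.ibatis.annotations.Select;

import com.pancm.pojo.User; // hypothetical entity class

// Picked up by @MapperScan(basePackages = "com.pancm.dao.master") and
// bound to the master data source via sqlSessionFactoryRef = "masterSqlSessionFactory".
public interface UserDao {

    // Illustrative annotation-based query; XML mapper statements work the same way.
    @Select("select id, name, age from t_user where name = #{name}")
    User findByName(@Param("name") String name);
}
```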
With the configuration above in place, Druid works as the connection pool. But Druid is not simply a connection pool; it is also a monitoring application. It comes with a web monitoring interface on which SQL-related information can be seen clearly.
To use Druid's monitoring in Spring Boot, you only need to register a StatViewServlet and a WebStatFilter, which set up the monitoring endpoint and the filtering rules. Here we can write both in a single class, using **@Configuration** and **@Bean**.
For ease of understanding, the relevant configuration notes are written directly as code comments, so I won't go into further detail here.
The code is as follows:
```java
@Configuration
public class DruidConfiguration {

    @Bean
    public ServletRegistrationBean druidStatViewServlet() {
        // Register the servlet that serves the monitoring pages
        ServletRegistrationBean servletRegistrationBean = new ServletRegistrationBean(
                new StatViewServlet(), "/druid/*");
        // IP whitelist (empty means everyone may access; separate multiple IPs with commas)
        servletRegistrationBean.addInitParameter("allow", "127.0.0.1");
        // IP blacklist (deny takes precedence over allow when both match)
        servletRegistrationBean.addInitParameter("deny", "127.0.0.2");
        // Login username and password for the monitoring pages
        servletRegistrationBean.addInitParameter("loginUsername", "pancm");
        servletRegistrationBean.addInitParameter("loginPassword", "123456");
        // Whether the statistics can be reset from the page
        servletRegistrationBean.addInitParameter("resetEnable", "false");
        return servletRegistrationBean;
    }

    @Bean
    public FilterRegistrationBean druidStatFilter() {
        FilterRegistrationBean filterRegistrationBean = new FilterRegistrationBean(
                new WebStatFilter());
        // Apply the filter to all requests
        filterRegistrationBean.addUrlPatterns("/*");
        // Patterns to exclude from the statistics
        filterRegistrationBean.addInitParameter("exclusions",
                "*.js,*.gif,*.jpg,*.png,*.css,*.ico,/druid/*");
        System.out.println("druid initialized successfully!");
        return filterRegistrationBean;
    }
}
```

After writing this, start the program, open http://127.0.0.1:8084/druid/index.html in a browser, and log in with the configured username and password to reach the web interface.
Multi-data source configuration
Before configuring multiple data sources, execute the following scripts in the springBoot and springBoot_test MySQL databases respectively.
```sql
-- Script for the springBoot library
CREATE TABLE `t_user` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'auto-increment id',
  `name` varchar(10) DEFAULT NULL COMMENT 'name',
  `age` int(2) DEFAULT NULL COMMENT 'age',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=15 DEFAULT CHARSET=utf8;

-- Script for the springBoot_test library
CREATE TABLE `t_student` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `name` varchar(16) DEFAULT NULL,
  `age` int(11) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8;
```
Note: to keep things simple, the two tables share the same structure, but this does not affect the test!
The connection information for these two data sources was already shown in the application.properties configuration above, so it is not posted again here.
Here we focus on the configuration of the second data source. It is similar to MasterDataSourceConfig.java above; the differences are the bean names and the absence of the **@Primary** annotation. Note that MasterDataSourceConfig.java scans its package and mapper files down to the exact directory, and the second data source does the same. The code is as follows:
```java
@Configuration
@MapperScan(basePackages = ClusterDataSourceConfig.PACKAGE, sqlSessionFactoryRef = "clusterSqlSessionFactory")
public class ClusterDataSourceConfig {

    static final String PACKAGE = "com.pancm.dao.cluster";
    static final String MAPPER_LOCATION = "classpath:mapper/cluster/*.xml";

    @Value("${cluster.datasource.url}")
    private String url;

    @Value("${cluster.datasource.username}")
    private String username;

    @Value("${cluster.datasource.password}")
    private String password;

    @Value("${cluster.datasource.driverClassName}")
    private String driverClass;

    // The remaining pool settings are injected exactly as in MasterDataSourceConfig

    @Bean(name = "clusterDataSource")
    public DataSource clusterDataSource() {
        DruidDataSource dataSource = new DruidDataSource();
        dataSource.setUrl(url);
        dataSource.setUsername(username);
        dataSource.setPassword(password);
        dataSource.setDriverClassName(driverClass);
        // Apply the same pool settings as in MasterDataSourceConfig here...
        return dataSource;
    }

    @Bean(name = "clusterTransactionManager")
    public DataSourceTransactionManager clusterTransactionManager() {
        return new DataSourceTransactionManager(clusterDataSource());
    }

    @Bean(name = "clusterSqlSessionFactory")
    public SqlSessionFactory clusterSqlSessionFactory(@Qualifier("clusterDataSource") DataSource clusterDataSource) throws Exception {
        final SqlSessionFactoryBean sessionFactory = new SqlSessionFactoryBean();
        sessionFactory.setDataSource(clusterDataSource);
        sessionFactory.setMapperLocations(new PathMatchingResourcePatternResolver()
                .getResources(ClusterDataSourceConfig.MAPPER_LOCATION));
        return sessionFactory.getObject();
    }
}
```
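One point worth noting once both configurations exist: each data source has its own DataSourceTransactionManager, so transactional service methods must name the manager they want. A minimal sketch, assuming a hypothetical StudentService working against the cluster data source:

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class StudentServiceImpl implements StudentService { // hypothetical service

    @Autowired
    private StudentDao studentDao; // mapper from com.pancm.dao.cluster

    // Name the manager explicitly; without it the @Primary masterTransactionManager
    // would be chosen, and writes to the cluster data source would not be covered.
    @Transactional("clusterTransactionManager")
    @Override
    public boolean insert(Student student) {
        return studentDao.insert(student) > 0; // assumes insert returns the affected row count
    }
}
```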
After the configuration is written successfully, start the program and run the tests: use the interfaces to insert data into the springBoot and springBoot_test libraries respectively.

t_user:

```
POST http://localhost:8084/api/user
{"name":"Zhang San","age":25}
{"name":"Li Si","age":25}
{"name":"Wang Wu","age":25}
```

t_student:

```
POST http://localhost:8084/api/student
{"name":"Student A","age":16}
{"name":"Student B","age":17}
{"name":"Student C","age":18}
```

After the data is inserted, call the different interfaces to query.
Request:

```
GET http://localhost:8084/api/user?name=Li Si
```

Response:

```json
{ "id": 2, "name": "Li Si", "age": 25 }
```

Request:

```
GET http://localhost:8084/api/student?name=Student C
```

Response:

```json
{ "id": 1, "name": "Student C", "age": 16 }
```

The returned data shows that the multiple data sources are configured successfully.
PageHelper pagination implementation
PageHelper is a pagination plugin for MyBatis. It is very handy and highly recommended!
PageHelper is very simple to use; you only need to add the pagehelper dependency in Maven.
The Maven dependency is as follows:
```xml
<dependency>
    <groupId>com.github.pagehelper</groupId>
    <artifactId>pagehelper-spring-boot-starter</artifactId>
    <version>1.2.3</version>
</dependency>
```
Note: I use the Spring Boot starter version here; other versions work as well.
After adding the dependency, you only need one of the following three configurations.
The first option goes in application.properties or application.yml:
```yaml
pagehelper:
  helperDialect: mysql
  offsetAsPageNum: true
  rowBoundsWithCount: true
  reasonable: false
```
The second option goes in the MyBatis XML configuration:
<bean id="sqlSessionFactory"> <property name="dataSource" ref="dataSource" /> <!-- Scan the mapping.xml file--> <property name="mapperLocations" value="classpath:mapper/*.xml"></property> <!-- Configuring the paging plugin--> <property name="plugins"> <array> <bean> <property name="properties"> <value> helperDialect=mysql offsetAsPageNum=true rowBoundsWithCount=true reasonable=false </value> </property> </bean> </array> </property> </bean>
The third option is in code, initializing the plugin at startup with the **@Bean** annotation:
```java
@Bean
public PageHelper pageHelper() {
    PageHelper pageHelper = new PageHelper();
    Properties properties = new Properties();
    // Database dialect
    properties.setProperty("helperDialect", "mysql");
    // Whether to treat the offset parameter as pageNum
    properties.setProperty("offsetAsPageNum", "true");
    // Whether to run the count query as well
    properties.setProperty("rowBoundsWithCount", "true");
    // Whether to rationalize pagination parameters
    properties.setProperty("reasonable", "false");
    pageHelper.setProperties(properties);
    return pageHelper;
}
```

Because we use multiple data sources here, the configuration is slightly different: the plugin has to be registered on each SqlSessionFactory. We therefore modify MasterDataSourceConfig.java accordingly, adding the following code to the masterSqlSessionFactory method.
@Bean(name = "masterSqlSessionFactory") @Primary public SqlSessionFactory masterSqlSessionFactory(@Qualifier("masterDataSource") DataSource masterDataSource) throws Exception { final SqlSessionFactoryBean sessionFactory = new SqlSessionFactoryBean(); sessionFactory.setDataSource(masterDataSource); sessionFactory.setMapperLocations(new PathMatchingResourcePatternResolver() .getResources(MasterDataSourceConfig.MAPPER_LOCATION)); //Pagination plug-in Interceptor interceptor = new PageInterceptor(); Properties properties = new Properties(); //Database properties.setProperty("helperDialect", "mysql"); //Whether to use the parameter offset as PageNum properties.setProperty("offsetAsPageNum", "true"); //Whether to query the count properties.setProperty("rowBoundsWithCount", "true"); //Whether to rationalize pagination properties.setProperty("reasonable", "false"); interceptor.setProperties(properties); sessionFactory.setPlugins(new Interceptor[] {interceptor}); return sessionFactory.getObject(); }Note: When other data sources also want to paging, please refer to the above code.
Note the reasonable parameter, which controls pagination rationalization and defaults to false. When set to true, pageNum <= 0 queries the first page, and pageNum > pages (beyond the total) queries the last page. When false, the default, the query runs with the parameters exactly as given.
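To make the difference concrete, here is a hedged sketch assuming the 3-row t_user table from above and a page size of 2 (so 2 pages in total); PageHelper here is com.github.pagehelper.PageHelper:

```java
// With reasonable=true, out-of-range page numbers are adjusted before querying:
PageHelper.startPage(0, 2);  // pageNum <= 0    -> treated as page 1
PageHelper.startPage(9, 2);  // pageNum > pages -> treated as the last page (page 2)

// With reasonable=false (the setting used in this article), the parameters are
// passed through unchanged: pageNum=9 becomes LIMIT 16,2 and returns an empty list.
PageHelper.startPage(9, 2);
```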
Once PageHelper is set up, using it only requires calling PageHelper.startPage(pageNum, pageSize); immediately before the query SQL. If you want the total count, call page.getTotal() on the returned Page after the query statement.
Code example:
```java
public List<T> findByListEntity(T entity) {
    List<T> list = null;
    try {
        Page<?> page = PageHelper.startPage(1, 2);
        System.out.println(getClassName(entity) + " sets two records on the first page!");
        list = getMapper().findByListEntity(entity);
        System.out.println("There are a total of " + page.getTotal() + " records, and the actual return is " + list.size() + " records!");
    } catch (Exception e) {
        logger.error("Querying " + getClassName(entity) + " failed! The reason is:", e);
    }
    return list;
}
```
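As a usage note, PageHelper also provides a PageInfo wrapper (com.github.pagehelper.PageInfo) that gathers the paging metadata in one object; a minimal sketch, reusing the hypothetical userDao from earlier:

```java
// startPage must directly precede the query it should paginate
PageHelper.startPage(1, 2);
List<User> list = userDao.findByListEntity(entity);

// PageInfo derives totals and page counts from the returned Page
PageInfo<User> pageInfo = new PageInfo<>(list);
System.out.println("total records: " + pageInfo.getTotal()
        + ", total pages: " + pageInfo.getPages()
        + ", records on this page: " + pageInfo.getSize());
```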
After the code is written, the final test begins. Query all the data in the t_user table with pagination.
Request:

```
GET http://localhost:8084/api/user
```

Response:

```json
[
  { "id": 1, "name": "Zhang San", "age": 25 },
  { "id": 2, "name": "Li Si", "age": 25 }
]
```

Console output:

```
Start querying...
User sets two records on the first page!
2018-04-27 19:55:50.769 DEBUG 6152 --- [io-8084-exec-10] c.p.d.m.UserDao.findByListEntity_COUNT  : ==>  Preparing: SELECT count(0) FROM t_user WHERE 1 = 1
2018-04-27 19:55:50.770 DEBUG 6152 --- [io-8084-exec-10] c.p.d.m.UserDao.findByListEntity_COUNT  : ==> Parameters:
2018-04-27 19:55:50.771 DEBUG 6152 --- [io-8084-exec-10] c.p.d.m.UserDao.findByListEntity_COUNT  : <==      Total: 1
2018-04-27 19:55:50.772 DEBUG 6152 --- [io-8084-exec-10] c.p.dao.master.UserDao.findByListEntity : ==>  Preparing: select id, name, age from t_user where 1=1 LIMIT ?
2018-04-27 19:55:50.773 DEBUG 6152 --- [io-8084-exec-10] c.p.dao.master.UserDao.findByListEntity : ==> Parameters: 2(Integer)
2018-04-27 19:55:50.774 DEBUG 6152 --- [io-8084-exec-10] c.p.dao.master.UserDao.findByListEntity : <==      Total: 2
There are a total of 3 records, and the actual return is 2 records!
```
Query all data in the t_student table and paginate it.
Request:

```
GET http://localhost:8084/api/student
```

Response:

```json
[
  { "id": 1, "name": "Student A", "age": 16 },
  { "id": 2, "name": "Student B", "age": 17 }
]
```

Console output:

```
Start querying...
Student sets two records on the first page!
2018-04-27 19:54:56.155 DEBUG 6152 --- [nio-8084-exec-8] c.p.d.c.StudentDao.findByListEntity_COUNT : ==>  Preparing: SELECT count(0) FROM t_student WHERE 1 = 1
2018-04-27 19:54:56.155 DEBUG 6152 --- [nio-8084-exec-8] c.p.d.c.StudentDao.findByListEntity_COUNT : ==> Parameters:
2018-04-27 19:54:56.156 DEBUG 6152 --- [nio-8084-exec-8] c.p.d.c.StudentDao.findByListEntity_COUNT : <==      Total: 1
2018-04-27 19:54:56.157 DEBUG 6152 --- [nio-8084-exec-8] c.p.d.c.StudentDao.findByListEntity       : ==>  Preparing: select id, name, age from t_student where 1=1 LIMIT ?
2018-04-27 19:54:56.157 DEBUG 6152 --- [nio-8084-exec-8] c.p.d.c.StudentDao.findByListEntity       : ==> Parameters: 2(Integer)
2018-04-27 19:54:56.157 DEBUG 6152 --- [nio-8084-exec-8] c.p.d.c.StudentDao.findByListEntity       : <==      Total: 2
There are a total of 3 records, and the actual return is 2 records!
```
After the queries are done, let's take a look at Druid's monitoring interface: enter http://127.0.0.1:8084/druid/index.html in the browser.
The operation records can be clearly seen!
If you want to know more about Druid, you can check the official documentation!
Conclusion
This article is finally finished. I ran into quite a few problems while writing the code, and solved them by experimenting and digging through documentation. This article only briefly introduces these usages; real applications may be more complicated.
Reference article: https://www.bysocket.com/?p=1712
Druid official address: https://github.com/alibaba/druid
PageHelper official address: https://github.com/pagehelper/Mybatis-PageHelper
I put the project on GitHub: https://github.com/xuwujing/springBoot