Functional requirements: the company needs to build a large operations platform.
1. The operations platform has its own database, which maintains basic functions such as users, roles, menus, departments, and permissions.
2. The operations platform also needs to provide back-office management for other services (service A and service B), and the databases of service A and service B are independent.
Therefore, the operations platform must connect to at least three databases: the operations database, the service A database, and the service B database, and each request should automatically switch to the corresponding data source. (My final implementation switches at the Service method level rather than at each DAO-layer method; the functions of our system are relatively independent of each other.)
Step 1: Configure multiple data sources
1. Define the data sources:
The data source I use is Alibaba's DruidDataSource (DBCP would work just as well; it does not matter here). The configuration is as follows:
<!-- op dataSource -->
<bean id="opDataSource" class="com.alibaba.druid.pool.DruidDataSource" init-method="init" destroy-method="close">
    <property name="url" value="${db.master.url}" />
    <property name="username" value="${db.master.user}" />
    <property name="password" value="${db.master.password}" />
    <property name="driverClassName" value="${db.master.driver}" />
    <property name="initialSize" value="5" />
    <property name="maxActive" value="100" />
    <property name="minIdle" value="10" />
    <property name="maxWait" value="60000" />
    <property name="validationQuery" value="SELECT 'x'" />
    <property name="testOnBorrow" value="false" />
    <property name="testOnReturn" value="false" />
    <property name="testWhileIdle" value="true" />
    <property name="timeBetweenEvictionRunsMillis" value="600000" />
    <property name="minEvictableIdleTimeMillis" value="300000" />
    <property name="removeAbandoned" value="true" />
    <property name="removeAbandonedTimeout" value="1800" />
    <property name="logAbandoned" value="true" />
    <!-- Configure filters for monitoring, statistics, and interception -->
    <property name="filters" value="config,mergeStat,wall,log4j2" />
    <property name="connectionProperties" value="config.decrypt=true" />
</bean>

<!-- serverA dataSource -->
<bean id="serverADataSource" class="com.alibaba.druid.pool.DruidDataSource" init-method="init" destroy-method="close">
    <property name="url" value="${db.serverA.master.url}" />
    <property name="username" value="${db.serverA.master.user}" />
    <property name="password" value="${db.serverA.master.password}" />
    <property name="driverClassName" value="${db.serverA.master.driver}" />
    <property name="initialSize" value="5" />
    <property name="maxActive" value="100" />
    <property name="minIdle" value="10" />
    <property name="maxWait" value="60000" />
    <property name="validationQuery" value="SELECT 'x'" />
    <property name="testOnBorrow" value="false" />
    <property name="testOnReturn" value="false" />
    <property name="testWhileIdle" value="true" />
    <property name="timeBetweenEvictionRunsMillis" value="600000" />
    <property name="minEvictableIdleTimeMillis" value="300000" />
    <property name="removeAbandoned" value="true" />
    <property name="removeAbandonedTimeout" value="1800" />
    <property name="logAbandoned" value="true" />
    <!-- Configure filters for monitoring, statistics, and interception -->
    <property name="filters" value="config,mergeStat,wall,log4j2" />
    <property name="connectionProperties" value="config.decrypt=true" />
</bean>

<!-- serverB dataSource -->
<bean id="serverBDataSource" class="com.alibaba.druid.pool.DruidDataSource" init-method="init" destroy-method="close">
    <property name="url" value="${db.serverB.master.url}" />
    <property name="username" value="${db.serverB.master.user}" />
    <property name="password" value="${db.serverB.master.password}" />
    <property name="driverClassName" value="${db.serverB.master.driver}" />
    <property name="initialSize" value="5" />
    <property name="maxActive" value="100" />
    <property name="minIdle" value="10" />
    <property name="maxWait" value="60000" />
    <property name="validationQuery" value="SELECT 'x'" />
    <property name="testOnBorrow" value="false" />
    <property name="testOnReturn" value="false" />
    <property name="testWhileIdle" value="true" />
    <property name="timeBetweenEvictionRunsMillis" value="600000" />
    <property name="minEvictableIdleTimeMillis" value="300000" />
    <property name="removeAbandoned" value="true" />
    <property name="removeAbandonedTimeout" value="1800" />
    <property name="logAbandoned" value="true" />
    <!-- Configure filters for monitoring, statistics, and interception -->
    <property name="filters" value="config,mergeStat,wall,log4j2" />
    <property name="connectionProperties" value="config.decrypt=true" />
</bean>

I configured three data sources: opDataSource (the data source of the operations platform itself), serverADataSource, and serverBDataSource.
2. Configure multipleDataSource
multipleDataSource acts as a proxy for the three data sources above. When it is actually wired into Spring/MyBatis, multipleDataSource is used no differently from a single, separately configured DataSource:
<!-- Spring integration with MyBatis: configure multipleDataSource -->
<bean id="sqlSessionFactory">
    <property name="dataSource" ref="multipleDataSource" />
    <!-- Automatically scan the Mapper XML files -->
    <property name="mapperLocations">
        <list>
            <value>classpath*:/sqlMapperXml/*.xml</value>
            <value>classpath*:/sqlMapperXml/*/*.xml</value>
        </list>
    </property>
    <property name="configLocation" value="classpath:xml/mybatis-config.xml" />
    <property name="typeAliasesPackage" value="com.XXX.platform.model" />
    <property name="globalConfig" ref="globalConfig" />
    <property name="plugins">
        <array>
            <!-- Pagination plugin configuration -->
            <bean id="paginationInterceptor">
                <property name="dialectType" value="mysql" />
                <property name="optimizeType" value="aliDruid" />
            </bean>
        </array>
    </property>
</bean>

<!-- MyBatis dynamic mapper implementation -->
<bean id="mapperScannerConfigurer">
    <!-- To generate dynamic implementations of the DAO interfaces, it needs to know where the interfaces are -->
    <property name="basePackage" value="com.XXX.platform.mapper" />
    <property name="sqlSessionFactoryBeanName" value="sqlSessionFactory" />
</bean>

<!-- MP (MyBatis-Plus) global configuration -->
<bean id="globalConfig">
    <property name="idType" value="0" />
    <property name="dbColumnUnderline" value="true" />
</bean>

<!-- Transaction management configured against multipleDataSource -->
<bean id="transactionManager">
    <property name="dataSource" ref="multipleDataSource" />
</bean>
Now that we know where multipleDataSource fits in, let's focus on how to implement it. Its configuration is as follows:
<bean id="multipleDataSource"> <property name="defaultTargetDataSource" ref="opDataSource" /> <property name="targetDataSources"> <map> <entry key="opDataSource" value-ref="opDataSource" /> <entry key="serverADataSource" value-ref="serverADataSource" /> <entry key="serverBDataSource" value-ref="serverBDataSource" /> </map> </property> </bean>
The implementing Java code is as follows; it needs little explanation and is clear at a glance:
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

/**
 * @ClassName: MultipleDataSource
 * @Description: Configure multiple data sources
 * @author: yuzhu.peng
 * @date: January 12, 2018, 4:37:25 PM
 */
public class MultipleDataSource extends AbstractRoutingDataSource {

    private static final ThreadLocal<String> dataSourceKey = new InheritableThreadLocal<String>();

    public static void setDataSourceKey(String dataSource) {
        dataSourceKey.set(dataSource);
    }

    @Override
    protected Object determineCurrentLookupKey() {
        return dataSourceKey.get();
    }

    public static void removeDataSourceKey() {
        dataSourceKey.remove();
    }
}

It inherits from Spring's AbstractRoutingDataSource and implements the abstract method determineCurrentLookupKey. Every time a database connection is obtained, this method determines which DataSource the connection comes from, as the Spring source code makes clear:
/* Get a connection */
public Connection getConnection() throws SQLException {
    return determineTargetDataSource().getConnection();
}

protected DataSource determineTargetDataSource() {
    Assert.notNull(this.resolvedDataSources, "DataSource router not initialized");
    /* determineCurrentLookupKey is the abstract method that returns the name of the data source to use */
    Object lookupKey = determineCurrentLookupKey();
    DataSource dataSource = (DataSource) this.resolvedDataSources.get(lookupKey);
    if (dataSource == null && (this.lenientFallback || lookupKey == null)) {
        dataSource = this.resolvedDefaultDataSource;
    }
    if (dataSource == null) {
        throw new IllegalStateException("Cannot determine target DataSource for lookup key [" + lookupKey + "]");
    }
    return dataSource;
}

/* Abstract method: the one implemented by our MultipleDataSource */
protected abstract Object determineCurrentLookupKey();

Step 2: Dynamically switch the data source on every request (Service method level)
The implementation idea is to use Spring AOP to intercept every Service method call and dynamically switch the lookup key of multipleDataSource according to the fully qualified name of the method's class. In our project, the operations for the different services, that is, for the different databases, are independent of each other, so it is not recommended to access different data sources inside the same Service method; otherwise the switching decision would have to be pushed down to the DAO level, that is, the SQL level, and transaction management would also become inconvenient.
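For concreteness, a service that works against the service A database simply lives in a serverA subpackage and has a name ending in ServiceImpl, so the aspect in the next step picks it up. A minimal sketch (package, class, mapper, and entity names are hypothetical; only the ".serverA." package segment and the *ServiceImpl suffix matter for routing):

package com.xxxx.platform.service.serverA;

import java.util.List;

import org.springframework.stereotype.Service;

/**
 * Hypothetical service: because its fully qualified name contains ".serverA."
 * and ends with "ServiceImpl", the aspect below routes every call to serverADataSource.
 */
@Service
public class OrderServiceImpl {

    private final OrderMapper orderMapper; // hypothetical mapper for a table in the service A database

    public OrderServiceImpl(OrderMapper orderMapper) {
        this.orderMapper = orderMapper;
    }

    public List<Order> listOrders() {
        // Executed against serverADataSource; the aspect sets the key before this method runs
        return orderMapper.selectAll();
    }
}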
Let's look at the AOP implementation of dynamic data source switching:
import java.lang.reflect.Proxy;

import org.apache.commons.lang.ClassUtils;
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.After;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.springframework.core.annotation.Order;

/**
 * Data source switching AOP
 *
 * @author yuzhu.peng
 * @since 2018-01-15
 */
@Aspect
@Order(1)
public class MultipleDataSourceInterceptor {

    /**
     * Switch the data source before every business implementation method runs.
     * Since multiple data sources are used, it is best to call mappers only from
     * *ServiceImpl classes; otherwise, when a table that does not belong to the
     * default data source is accessed, a "table does not exist" exception is thrown.
     *
     * @param joinPoint
     * @throws Throwable
     */
    @Before("execution(* com.xxxx.platform.service..*.*ServiceImpl.*(..))")
    public void setDataSource(JoinPoint joinPoint) throws Throwable {
        Class<?> clazz = joinPoint.getTarget().getClass();
        String className = clazz.getName();
        if (ClassUtils.isAssignable(clazz, Proxy.class)) {
            className = joinPoint.getSignature().getDeclaringTypeName();
        }
        // Choose the data source by class name; anything else falls back to the operations data source
        if (className.contains(".serverA.")) {
            MultipleDataSource.setDataSourceKey(DBConstant.DATA_SOURCE_serverA);
        } else if (className.contains(".serverB.")) {
            MultipleDataSource.setDataSourceKey(DBConstant.DATA_SOURCE_serverB);
        } else {
            MultipleDataSource.setDataSourceKey(DBConstant.DATA_SOURCE_OP);
        }
    }

    /**
     * Release the current data source key when the method completes. If it is not
     * released, frequent requests can leak the key across calls: a query against a
     * table of one data source may run against another and report "table does not exist".
     *
     * @param joinPoint
     * @throws Throwable
     */
    @After("execution(* com.xxxx.platform.service..*.*ServiceImpl.*(..))")
    public void removeDataSource(JoinPoint joinPoint) throws Throwable {
        MultipleDataSource.removeDataSourceKey();
    }
}

We intercept all *ServiceImpl methods, determine from the fully qualified class name which data source the method belongs to, and select the corresponding data source; once the method has finished, the current data source key is released. Note the use of Spring's @Order annotation, which I will come back to below: when multiple aspects are defined, the order is very important.
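One thing the article does not show is how the aspect gets wired up. Assuming XML configuration like the rest of the setup, registering the aspect and enabling AspectJ auto-proxying might look like this (the interceptor's package name is an assumption):

<!-- Enable @AspectJ support so that @Aspect/@Before/@After are honored -->
<aop:aspectj-autoproxy />

<!-- Register the data source switching aspect (package name is an assumption) -->
<bean id="multipleDataSourceInterceptor" class="com.xxxx.platform.interceptor.MultipleDataSourceInterceptor" />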
Other notes:
At first the project did not use transactions, and everything was fine: every request hit the correct data source. After adding Spring transaction management, the data source could no longer be switched dynamically (either the transaction did not take effect, or the two did not work at the same time). Later I found that the cause was the execution order of the AOP advice, which is why I used the Spring @Order annotation mentioned above:
The smaller the order value, the earlier the advice executes. With the data source switching aspect given a smaller order than the transaction advice, you can not only switch data sources dynamically but also use transactions successfully (within the same data source).
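Concretely, since the switching aspect is annotated with @Order(1), giving the transaction advice any larger order value ensures the lookup key is already set when the transaction manager obtains a connection from multipleDataSource. A sketch assuming annotation-driven transactions (the article does not show the project's actual transaction configuration):

<!-- The switching aspect runs first (@Order(1) on the @Aspect class);
     the transaction advice runs second (order="2"), so the data source key
     is set before the transaction opens a connection. -->
<tx:annotation-driven transaction-manager="transactionManager" order="2" />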