First, three ways of programming JDBC batch inserts are introduced and compared. The details are as follows.
JDBC batch insertion is used mainly for data import and for log writing, since logs are generally written to a file first (and later imported into the database in bulk).
I used the MySQL 5.1.5 JDBC driver to test the three most commonly used approaches.
Method 1: Batch inserts with PreparedStatement
try {
    Class.forName("com.mysql.jdbc.Driver");
    conn = DriverManager.getConnection(o_url, userName, password);
    conn.setAutoCommit(false);
    String sql = "INSERT INTO adlogs(ip,website,yyyymmdd,hour,object_id) VALUES(?,?,?,?,?)";
    PreparedStatement prest = conn.prepareStatement(sql,
            ResultSet.TYPE_SCROLL_SENSITIVE, ResultSet.CONCUR_READ_ONLY);
    for (int x = 0; x < size; x++) {
        prest.setString(1, "192.168.1.1");
        prest.setString(2, "localhost");
        prest.setString(3, "20081009");
        prest.setInt(4, 8);
        prest.setString(5, "11111111");
        prest.addBatch();       // queue this row in the batch
    }
    prest.executeBatch();       // send the whole batch to the server
    conn.commit();              // commit all rows in one transaction
    conn.close();
} catch (SQLException ex) {
    Logger.getLogger(MyLogger.class.getName()).log(Level.SEVERE, null, ex);
} catch (ClassNotFoundException ex) {
    Logger.getLogger(MyLogger.class.getName()).log(Level.SEVERE, null, ex);
}
The two extra parameters passed when creating the statement have the following meanings:
The first parameter specifies the type of ResultSet. The options are:
TYPE_FORWARD_ONLY: the default type. The cursor can only move forward through the result set, and the ResultSet is not affected by changes other users make to the database.
TYPE_SCROLL_INSENSITIVE: allows the cursor to move forward or backward through the ResultSet, and even to be positioned directly, for example moving to the fourth record or moving back two records from the current position. It is not affected by changes other users make to the database.
TYPE_SCROLL_SENSITIVE: like TYPE_SCROLL_INSENSITIVE, the cursor can be positioned freely, but this type is affected by changes made by other users. If another user deletes a record after the query has executed, that record disappears from the ResultSet; likewise, changes to data values are reflected in the ResultSet.
The second parameter sets the concurrency of the ResultSet, which determines whether the ResultSet can be updated. The options are:
CONCUR_READ_ONLY: the default value; the ResultSet cannot be updated.
CONCUR_UPDATABLE: specifies that the ResultSet can be updated.
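To make these two parameters concrete, here is a minimal sketch (not from the original article; it reuses conn and the adlogs table from Method 1, and the SELECT query is assumed) showing what a scrollable, read-only ResultSet allows:
// Sketch: scrolling through a result set created with
// TYPE_SCROLL_INSENSITIVE and CONCUR_READ_ONLY.
Statement st = conn.createStatement(
        ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
ResultSet rs = st.executeQuery("SELECT ip, website FROM adlogs");
rs.absolute(4);      // jump straight to the fourth row
rs.relative(-2);     // move two rows back from the current position
rs.beforeFirst();    // rewind, then iterate forward as usual
while (rs.next()) {
    System.out.println(rs.getString("ip") + " " + rs.getString("website"));
}
rs.close();
st.close();
With TYPE_FORWARD_ONLY, the absolute() and relative() calls above would throw an SQLException; for pure inserts, as in the three methods here, the plain conn.prepareStatement(sql) form is sufficient.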
Method 2: Batch inserts with Statement
conn.setAutoCommit(false);
Statement stmt = conn.createStatement(ResultSet.TYPE_SCROLL_SENSITIVE,
        ResultSet.CONCUR_READ_ONLY);
for (int x = 0; x < size; x++) {
    stmt.addBatch("INSERT INTO adlogs(ip,website,yyyymmdd,hour,object_id) VALUES('192.168.1.3','localhost','20081009',8,'23123')");
}
stmt.executeBatch();
conn.commit();
Method 3: Direct execution with Statement
conn.setAutoCommit(false);
Statement stmt = conn.createStatement(ResultSet.TYPE_SCROLL_SENSITIVE,
        ResultSet.CONCUR_READ_ONLY);
for (int x = 0; x < size; x++) {
    stmt.execute("INSERT INTO adlogs(ip,website,yyyymmdd,hour,object_id) VALUES('192.168.1.3','localhost','20081009',8,'23123')");
}
conn.commit();
The average time to insert 100,000 rows with each of the methods above was:
Method 1: 17.844s
Method 2: 18.421s
Method 3: 16.359s
As you can see, JDBC batch statements not only failed to improve performance here, they were actually slower than issuing the inserts one by one. This may well depend on how a particular JDBC driver implements batching. The attachment contains the test code I ran on my machine.
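One driver-specific detail worth knowing (the article does not say whether it was set in the benchmark above): MySQL Connector/J only rewrites a batch into a single multi-row INSERT when rewriteBatchedStatements=true is set on the connection URL; without it, executeBatch() sends the statements to the server one at a time, which is consistent with the numbers above. A sketch, assuming a local test database:
// Sketch: enabling batch rewriting in MySQL Connector/J. With this flag
// the driver turns the batch into one multi-row INSERT instead of
// sending each statement separately.
String url = "jdbc:mysql://localhost:3306/test?rewriteBatchedStatements=true";
Connection conn = DriverManager.getConnection(url, userName, password);
conn.setAutoCommit(false);
PreparedStatement ps = conn.prepareStatement(
        "INSERT INTO adlogs(ip,website,yyyymmdd,hour,object_id) VALUES(?,?,?,?,?)");
// ... set parameters and addBatch() as in Method 1 ...
ps.executeBatch();
conn.commit();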
When doing batch inserts, the most important thing is to disable auto-commit; whether or not you use JDBC's batch API matters much less:
conn.setAutoCommit(false)
Personally, I find the first method the most convenient and practical.
A worked example of JDBC batch insertion:
Recently, while writing a program to import Excel data into a database, I decided to use JDBC batch insertion because of the large data volume: call preparedStatement.addBatch() for each row and, every 10,000 rows, execute the batch with preparedStatement.executeBatch(). I expected this to be fast, but inserting 65,536 rows took more than 30 minutes, far beyond my expectations. So I asked my colleagues how they handled this kind of bulk import. It turned out they also used JDBC batch insertion, but unlike me they first called con.setAutoCommit(false), then preparedStatement.executeBatch(), and then con.commit(). I tried again, and it felt like a miracle: the import that had taken half an hour finished in only 15 seconds once those two lines were added. Looking for the reason, I found the following explanation online:
When importing data into InnoDB, make sure that MySQL does not have autocommit mode enabled, because that requires a log flush to disk for every insert. To disable autocommit during your import operation, surround it with SET autocommit and COMMIT statements:
SET autocommit=0;
... SQL import statements ...
COMMIT;
The first attempt was slow precisely because setAutoCommit(false) was missing: each insert statement forced a log flush to disk, so even though batch insertion was configured, the effect was the same as inserting rows one at a time, which made the import extremely slow. (In JDBC, SET autocommit=0 and COMMIT correspond to con.setAutoCommit(false) and con.commit().)
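For contrast, the slow first attempt looked roughly like this (a reconstruction from the description above, not the author's exact code): autocommit is left at its default of true, so every row in the batch is committed, and its log flushed, individually.
// Reconstructed sketch of the slow first attempt (autocommit left enabled).
// executeBatch() still sends the rows, but MySQL commits and flushes the
// log for every single insert, so the batch behaves like single inserts.
PreparedStatement ps = con.prepareStatement(sql); // autocommit is still true
for (int i = 1; i <= 65536; i++) {
    // ... set the row's parameters here ...
    ps.addBatch();
    if (i % 10000 == 0) {
        ps.executeBatch(); // each row auto-commits: one log flush per insert
    }
}
ps.executeBatch(); // send the remaining rows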
Part of the code is as follows:
String sql = "insert into table *****";   // actual statement omitted in the original
con.setAutoCommit(false);
ps = con.prepareStatement(sql);
for (int i = 1; i <= 65536; i++) {
    // ... set the row's parameters here ...
    ps.addBatch();
    if (i % 10000 == 0) {     // execute and commit every 10,000 rows
        ps.executeBatch();
        con.commit();
    }
}
ps.executeBatch();            // insert the remaining (< 10,000) rows
con.commit();
The above was just an appetizer; now for the main course:
1. The batch write test code
long start = System.currentTimeMillis();
DaoRecord daoRecord = new DaoRecord();
List<T> list = new ArrayList<T>();
for (int i = 1; i <= 1000; i++) {
    for (int j = 1; j <= 1000; j++) {
        T t = new T();
        t.setI(i);
        t.setJ(j);
        list.add(t);
    }
}
daoRecord.InsertBatch(list);   // insert 1,000,000 records in batches
System.out.println("Time-consuming: " + (System.currentTimeMillis() - start) + "ms");
2. The batch write method
public void InsertBatch(List<T> list) {
    String sql = "insert into t(go,back) values(?,?)";
    DBHelper dbh = new DBHelper(sql);
    Connection conn = dbh.returnConn();
    try {
        conn.setAutoCommit(false); // must be false; see the first example above
        PreparedStatement ps = conn.prepareStatement(sql);
        for (int i = 0; i < list.size(); i++) {
            ps.setInt(1, list.get(i).getI());
            ps.setInt(2, list.get(i).getJ());
            ps.addBatch();
            if ((i + 1) % 10000 == 0) { // execute and commit every 10,000 rows
                ps.executeBatch();
                conn.commit();
            }
        }
        ps.executeBatch();             // insert the remaining rows
        conn.commit();
        conn.close();
    } catch (SQLException e) {
        e.printStackTrace();
    }
}
Data table:
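One design note: the method above leaks the connection if an exception is thrown before conn.close() is reached. A minimal variant (my suggestion, not from the original article) using try-with-resources, available since Java 7, closes the resources automatically; it reuses the author's DBHelper and T classes:
// Sketch: the same batch insert with try-with-resources, so the
// Connection and PreparedStatement are closed even when an
// SQLException is thrown mid-batch.
public void insertBatch(List<T> list) throws SQLException {
    String sql = "insert into t(go,back) values(?,?)";
    try (Connection conn = new DBHelper(sql).returnConn();
         PreparedStatement ps = conn.prepareStatement(sql)) {
        conn.setAutoCommit(false);
        for (int i = 0; i < list.size(); i++) {
            ps.setInt(1, list.get(i).getI());
            ps.setInt(2, list.get(i).getJ());
            ps.addBatch();
            if ((i + 1) % 10000 == 0) {
                ps.executeBatch();
                conn.commit();
            }
        }
        ps.executeBatch();
        conn.commit();
    }
}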
Experimental results:
That is all for this article; I hope it is helpful for everyone's learning.