System environment: CentOS 7.2, running in a VM (VMware 12)
Current installed version: elasticsearch-2.4.0.tar.gz
Operating an ES cluster from Java takes three steps: 1) configure the cluster connection information; 2) create a client; 3) query cluster information.
1: Cluster name
The default cluster name is elasticsearch. If the name configured on the client does not match the cluster's actual name, an error is raised when the client tries to use node resources.
2: Sniffing function
Enable sniffing with the client.transport.sniff setting. With it on, you only need to specify a single node in the cluster (not necessarily a master node), and the client discovers and loads the other cluster nodes automatically. As long as the program keeps running, the client can still reach the other nodes even if the originally specified node goes down.
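For reference, enabling sniffing on the 2.x TransportClient is a single extra setting; a minimal configuration sketch (the cluster name and address are placeholders, and note that the author reports sniffing failed to connect in their environment, so test it in yours):

```java
// Settings-based sniffing setup for the ES 2.x TransportClient.
// The address below is only a seed; sniffing discovers the rest of the cluster.
Settings settings = Settings.settingsBuilder()
        .put("cluster.name", "elasticsearch_wenbronk")
        .put("client.transport.sniff", true) // discover the other nodes from this seed
        .build();
TransportClient client = TransportClient.builder().settings(settings).build()
        .addTransportAddress(new InetSocketTransportAddress(
                new InetSocketAddress("192.168.50.37", 9300))); // any one live node
```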
3: Query type SearchType.QUERY_THEN_FETCH
ES supports four query types:
QUERY_AND_FETCH:
The coordinating ("master") node distributes the query request to all shards. Each shard sorts its own results by its local ranking rules (term frequency / document frequency) and returns the full documents to the coordinating node, which merges and sorts all the data and returns it to the client. This method needs only one round trip to ES.
This query method has two problems: data volume and ranking. First, the coordinating node aggregates full results from every shard, so the amount of data can be large; with 5 shards and size=10, for example, it may receive up to 50 full documents. Second, each shard ranks by its own local statistics, which may be inconsistent across shards.
QUERY_THEN_FETCH:
The coordinating node distributes the request to all shards. After each shard sorts locally, it returns only the ids and scores of its hits to the coordinating node. The coordinating node merges and sorts these, then fetches the corresponding documents from the relevant shards by id and returns them to the client. This method needs two round trips to ES.
This solves the data-volume problem, but the ranking inconsistency remains. It is ES's default query type.
DFS_QUERY_AND_FETCH and DFS_QUERY_THEN_FETCH:
Both first compute global term statistics so every shard scores with the same rules, which solves the ranking problem. DFS_QUERY_AND_FETCH still has the data-volume problem; DFS_QUERY_THEN_FETCH solves both problems but is the least efficient of the four.
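The difference between the two phases can be made concrete with a toy simulation (this is illustrative plain Java, not ES internals): in the "query" phase each shard hands back only (id, score) pairs, the coordinating node merges them into a global top-k, and only those k documents would be fetched in the second phase.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class QueryThenFetchDemo {
    // A shard's per-hit result in the "query" phase: just an id and a score.
    static class Hit {
        final String id;
        final double score;
        Hit(String id, double score) { this.id = id; this.score = score; }
    }

    // Phase 1: each shard returns only (id, score); the coordinating node
    // merges them and keeps the global top-k. Phase 2 (not shown) would
    // fetch the full documents for just these k ids.
    static List<String> mergeTopK(List<List<Hit>> shardResults, int k) {
        List<Hit> all = new ArrayList<>();
        for (List<Hit> shard : shardResults) {
            all.addAll(shard);
        }
        all.sort(Comparator.comparingDouble((Hit h) -> h.score).reversed());
        List<String> ids = new ArrayList<>();
        for (int i = 0; i < Math.min(k, all.size()); i++) {
            ids.add(all.get(i).id);
        }
        return ids;
    }

    public static void main(String[] args) {
        List<List<Hit>> shards = List.of(
                List.of(new Hit("a", 0.9), new Hit("b", 0.4)),
                List.of(new Hit("c", 0.7), new Hit("d", 0.1)));
        // Only 2 of the 4 candidate ids survive the merge, so only those 2
        // full documents would need to travel over the network in phase 2.
        System.out.println(mergeTopK(shards, 2)); // prints [a, c]
    }
}
```

This is why QUERY_THEN_FETCH fixes the data-volume problem: the per-shard payload in phase 1 is tiny, and full documents are fetched only for the winners.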
1. Obtain a client (two ways)
```java
@Before
public void before() throws Exception {
    Map<String, String> map = new HashMap<String, String>();
    map.put("cluster.name", "elasticsearch_wenbronk");
    Settings.Builder settings = Settings.builder().put(map);
    client = TransportClient.builder().settings(settings).build()
            .addTransportAddress(new InetSocketTransportAddress(
                    InetAddress.getByName("www.wenbronk.com"), Integer.parseInt("9300")));
}

@Before
public void before11() throws Exception {
    // Create the client with the default cluster name, "elasticsearch"
    // client = TransportClient.builder().build()
    //         .addTransportAddress(new InetSocketTransportAddress(
    //                 InetAddress.getByName("www.wenbronk.com"), 9300));

    // Specify the cluster configuration through a Settings object
    Settings settings = Settings.settingsBuilder()
            .put("cluster.name", "elasticsearch_wenbronk") // set the cluster name
            // .put("client.transport.sniff", true) // enable sniffing; would not connect after enabling, reason unknown
            // .put("network.host", "192.168.50.37")
            .put("client.transport.ignore_cluster_name", true) // skip cluster-name verification; connects even if the name is wrong
            // .put("client.transport.nodes_sampler_interval", 5) // reported an error
            // .put("client.transport.ping_timeout", 5) // reported an error; how long to wait for a ping, default 5s
            .build();
    client = TransportClient.builder().settings(settings).build()
            .addTransportAddress(new InetSocketTransportAddress(new InetSocketAddress("192.168.50.37", 9300)));
    System.out.println("success connect");
}
```

PS: Neither of the two methods given on the official site worked on its own for me; they had to be combined, which cost me a whole afternoon...
The meaning of other parameters:
Code:
```java
package com.wenbronk.javaes;

import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.util.Date;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.TimeUnit;

import org.elasticsearch.action.bulk.BackoffPolicy;
import org.elasticsearch.action.bulk.BulkProcessor;
import org.elasticsearch.action.bulk.BulkProcessor.Listener;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkRequestBuilder;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.action.delete.DeleteRequest;
import org.elasticsearch.action.delete.DeleteResponse;
import org.elasticsearch.action.get.GetResponse;
import org.elasticsearch.action.get.MultiGetItemResponse;
import org.elasticsearch.action.get.MultiGetResponse;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.action.index.IndexResponse;
import org.elasticsearch.action.update.UpdateRequest;
import org.elasticsearch.action.update.UpdateResponse;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.cluster.node.DiscoveryNode;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.common.unit.ByteSizeUnit;
import org.elasticsearch.common.unit.ByteSizeValue;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;
import org.elasticsearch.script.Script;
import org.junit.Before;
import org.junit.Test;

import com.alibaba.fastjson.JSONObject;

/**
 * Use the Java API to operate Elasticsearch
 *
 * @author 231
 */
public class JavaESTest {
    private TransportClient client;
    private IndexRequest source;

    /**
     * Get the connection, the first way
     */
    // @Before
    public void before() throws Exception {
        Map<String, String> map = new HashMap<String, String>();
        map.put("cluster.name", "elasticsearch_wenbronk");
        Settings.Builder settings = Settings.builder().put(map);
        client = TransportClient.builder().settings(settings).build()
                .addTransportAddress(new InetSocketTransportAddress(
                        InetAddress.getByName("www.wenbronk.com"), Integer.parseInt("9300")));
    }

    /**
     * View cluster information
     */
    @Test
    public void testInfo() {
        List<DiscoveryNode> nodes = client.connectedNodes();
        for (DiscoveryNode node : nodes) {
            System.out.println(node.getHostAddress());
        }
    }

    /**
     * Build a JSON string, method 1: direct concatenation
     */
    public String createJson1() {
        String json = "{" +
                "\"user\":\"kimchy\"," +
                "\"postDate\":\"2013-01-30\"," +
                "\"message\":\"trying out Elasticsearch\"" +
                "}";
        return json;
    }

    /**
     * Create JSON with a Map
     */
    public Map<String, Object> createJson2() {
        Map<String, Object> json = new HashMap<String, Object>();
        json.put("user", "kimchy");
        json.put("postDate", new Date());
        json.put("message", "trying out elasticsearch");
        return json;
    }

    /**
     * Create JSON using fastjson
     */
    public JSONObject createJson3() {
        JSONObject json = new JSONObject();
        json.put("user", "kimchy");
        json.put("postDate", new Date());
        json.put("message", "trying out elasticsearch");
        return json;
    }

    /**
     * Use the ES helper class
     */
    public XContentBuilder createJson4() throws Exception {
        // One of the ways to build a JSON object
        XContentBuilder source = XContentFactory.jsonBuilder()
                .startObject()
                .field("user", "kimchy")
                .field("postDate", new Date())
                .field("message", "trying to out ElasticSearch")
                .endObject();
        return source;
    }

    /**
     * Save into the index
     */
    @Test
    public void test1() throws Exception {
        XContentBuilder source = createJson4();
        // Store the JSON in the index
        IndexResponse response = client.prepareIndex("twitter", "tweet", "1").setSource(source).get();
        // Read the result
        String index = response.getIndex();
        String type = response.getType();
        String id = response.getId();
        long version = response.getVersion();
        boolean created = response.isCreated();
        System.out.println(index + ": " + type + ": " + id + ": " + version + ": " + created);
    }

    /**
     * get API: fetch the specified document
     */
    @Test
    public void testGet() {
        GetResponse response = client.prepareGet("twitter", "tweet", "1")
                .setOperationThreaded(false) // run on the calling thread
                .get();
        System.out.println(response.getSourceAsString());
    }

    /**
     * Test the delete API
     */
    @Test
    public void testDelete() {
        DeleteResponse response = client.prepareDelete("twitter", "tweet", "1").get();
        String index = response.getIndex();
        String type = response.getType();
        String id = response.getId();
        long version = response.getVersion();
        System.out.println(index + " : " + type + ": " + id + ": " + version);
    }

    /**
     * Test the update API, using an UpdateRequest object
     */
    @Test
    public void testUpdate() throws Exception {
        UpdateRequest updateRequest = new UpdateRequest();
        updateRequest.index("twitter");
        updateRequest.type("tweet");
        updateRequest.id("1");
        updateRequest.doc(XContentFactory.jsonBuilder()
                .startObject()
                // Fields that do not exist are added; existing fields are replaced
                .field("gender", "male")
                .field("message", "hello")
                .endObject());
        UpdateResponse response = client.update(updateRequest).get();
        // Print
        String index = response.getIndex();
        String type = response.getType();
        String id = response.getId();
        long version = response.getVersion();
        System.out.println(index + " : " + type + ": " + id + ": " + version);
    }

    /**
     * Test the update API using the client directly
     */
    @Test
    public void testUpdate2() throws Exception {
        // Update with a Script object
        // UpdateResponse response = client.prepareUpdate("twitter", "tweet", "1")
        //         .setScript(new Script("ctx._source.gender = \"male\""))
        //         .get();

        // UpdateResponse response = client.prepareUpdate("twitter", "tweet", "1")
        //         .setDoc(XContentFactory.jsonBuilder()
        //                 .startObject()
        //                 .field("gender", "malelelele")
        //                 .endObject()).get();

        // Use an UpdateRequest object with a script
        // UpdateRequest updateRequest = new UpdateRequest("twitter", "tweet", "1")
        //         .script(new Script("ctx._source.gender=\"male\""));
        // UpdateResponse response = client.update(updateRequest).get();

        UpdateResponse response = client.update(new UpdateRequest("twitter", "tweet", "1")
                .doc(XContentFactory.jsonBuilder()
                        .startObject()
                        .field("gender", "male")
                        .endObject()))
                .get();
        System.out.println(response.getIndex());
    }

    /**
     * Test update using an UpdateRequest with a script
     */
    @Test
    public void testUpdate3() throws InterruptedException, Exception {
        UpdateRequest updateRequest = new UpdateRequest("twitter", "tweet", "1")
                .script(new Script("ctx._source.gender=\"male\""));
        UpdateResponse response = client.update(updateRequest).get();
    }

    /**
     * Test the upsert method
     */
    @Test
    public void testUpsert() throws Exception {
        // The IndexRequest takes effect when the document does not exist yet
        IndexRequest indexRequest = new IndexRequest("twitter", "tweet", "2")
                .source(XContentFactory.jsonBuilder()
                        .startObject()
                        .field("name", "214")
                        .field("gender", "gfrerq")
                        .endObject());
        // If the document is found, the doc below is applied as an update
        UpdateRequest upsert = new UpdateRequest("twitter", "tweet", "2")
                .doc(XContentFactory.jsonBuilder()
                        .startObject()
                        .field("user", "wenbronk")
                        .endObject())
                .upsert(indexRequest);
        client.update(upsert).get();
    }

    /**
     * Test the multi-get API: fetch from different indexes, types, and ids
     */
    @Test
    public void testMultiGet() {
        MultiGetResponse multiGetResponse = client.prepareMultiGet()
                .add("twitter", "tweet", "1")
                .add("twitter", "tweet", "2", "3", "4")
                .add("anothoer", "type", "foo")
                .get();
        for (MultiGetItemResponse itemResponse : multiGetResponse) {
            GetResponse response = itemResponse.getResponse();
            if (response.isExists()) {
                String sourceAsString = response.getSourceAsString();
                System.out.println(sourceAsString);
            }
        }
    }

    /**
     * bulk: batch execution; one request can index or delete multiple documents
     */
    @Test
    public void testBulk() throws Exception {
        BulkRequestBuilder bulkRequest = client.prepareBulk();
        bulkRequest.add(client.prepareIndex("twitter", "tweet", "1")
                .setSource(XContentFactory.jsonBuilder()
                        .startObject()
                        .field("user", "kimchy")
                        .field("postDate", new Date())
                        .field("message", "trying out Elasticsearch")
                        .endObject()));
        bulkRequest.add(client.prepareIndex("twitter", "tweet", "2")
                .setSource(XContentFactory.jsonBuilder()
                        .startObject()
                        .field("user", "kimchy")
                        .field("postDate", new Date())
                        .field("message", "another post")
                        .endObject()));
        BulkResponse response = bulkRequest.get();
        System.out.println(response.getHeaders());
    }

    /**
     * Use a BulkProcessor
     */
    @Test
    public void testBulkProcessor() throws Exception {
        // Create the BulkProcessor object
        BulkProcessor bulkProcessor = BulkProcessor.builder(client, new Listener() {
            public void beforeBulk(long paramLong, BulkRequest paramBulkRequest) {
                // TODO Auto-generated method stub
            }

            public void afterBulk(long paramLong, BulkRequest paramBulkRequest, Throwable paramThrowable) {
                // TODO Auto-generated method stub
            }

            public void afterBulk(long paramLong, BulkRequest paramBulkRequest, BulkResponse paramBulkResponse) {
                // TODO Auto-generated method stub
            }
        })
                // Execute the bulk every 10000 requests
                .setBulkActions(10000)
                // Flush the bulk every 1 GB of data
                .setBulkSize(new ByteSizeValue(1, ByteSizeUnit.GB))
                // Flush at a fixed 5-second interval regardless
                .setFlushInterval(TimeValue.timeValueSeconds(5))
                // Number of concurrent requests: 0 means no concurrency, 1 allows concurrent execution
                .setConcurrentRequests(1)
                // Backoff: start retrying after 100 ms, at most 3 retries
                .setBackoffPolicy(BackoffPolicy.exponentialBackoff(TimeValue.timeValueMillis(100), 3))
                .build();
        // Add individual requests
        bulkProcessor.add(new IndexRequest("twitter", "tweet", "1"));
        bulkProcessor.add(new DeleteRequest("twitter", "tweet", "2"));
        // Close
        bulkProcessor.awaitClose(10, TimeUnit.MINUTES);
        // or: bulkProcessor.close();
    }
}
```

Test2 code:
```java
package com.wenbronk.javaes;

import java.net.InetSocketAddress;

import org.elasticsearch.action.search.MultiSearchResponse;
import org.elasticsearch.action.search.SearchRequestBuilder;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.action.search.SearchType;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.settings.Settings.Builder;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.search.aggregations.Aggregation;
import org.elasticsearch.search.aggregations.AggregationBuilders;
import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramInterval;
import org.elasticsearch.search.sort.SortOrder;
import org.elasticsearch.search.sort.SortParseElement;
import org.junit.Before;
import org.junit.Test;

/**
 * Use the Java API to operate Elasticsearch: search API
 *
 * @author 231
 */
public class JavaESTest2 {
    private TransportClient client;

    /**
     * Get the client object
     */
    @Before
    public void testBefore() {
        Builder builder = Settings.settingsBuilder();
        builder.put("cluster.name", "wenbronk_escluster");
        // builder.put("client.transport.ignore_cluster_name", true);
        Settings settings = builder.build();
        TransportClient.Builder transportBuild = TransportClient.builder();
        TransportClient client1 = transportBuild.settings(settings).build();
        client = client1.addTransportAddress(
                new InetSocketTransportAddress(new InetSocketAddress("192.168.50.37", 9300)));
        System.out.println("success connect to escluster");
    }

    /**
     * Test query
     */
    @Test
    public void testSearch() {
        // SearchRequestBuilder searchRequestBuilder = client.prepareSearch("twitter", "tweet", "1");
        // SearchResponse response = searchRequestBuilder.setTypes("type1", "type2")
        //         .setSearchType(SearchType.DFS_QUERY_THEN_FETCH)
        //         .setQuery(QueryBuilders.termQuery("user", "test"))
        //         .setPostFilter(QueryBuilders.rangeQuery("age").from(0).to(1))
        //         .setFrom(0).setSize(2).setExplain(true)
        //         .execute().actionGet();
        SearchResponse response = client.prepareSearch().execute().actionGet();
        // SearchHits hits = response.getHits();
        // for (SearchHit searchHit : hits) {
        //     for (Iterator<SearchHitField> iterator = searchHit.iterator(); iterator.hasNext(); ) {
        //         SearchHitField next = iterator.next();
        //         System.out.println(next.getValues());
        //     }
        // }
        System.out.println(response);
    }

    /**
     * Test the scroll API: process large result sets more efficiently
     */
    @Test
    public void testScrolls() {
        QueryBuilder queryBuilder = QueryBuilders.termQuery("twitter", "tweet");
        SearchResponse response = client.prepareSearch("twitter")
                .addSort(SortParseElement.DOC_FIELD_NAME, SortOrder.ASC)
                .setScroll(new TimeValue(60000))
                .setQuery(queryBuilder)
                .setSize(100).execute().actionGet();
        while (true) {
            for (SearchHit hit : response.getHits().getHits()) {
                System.out.println("i am coming");
            }
            // Fetch the next batch; reassign response so the loop advances
            response = client.prepareSearchScroll(response.getScrollId())
                    .setScroll(new TimeValue(60000)).execute().actionGet();
            if (response.getHits().getHits().length == 0) {
                System.out.println("oh no======");
                break;
            }
        }
    }

    /**
     * Test multiSearch
     */
    @Test
    public void testMultiSearch() {
        QueryBuilder qb1 = QueryBuilders.queryStringQuery("elasticsearch");
        SearchRequestBuilder requestBuilder1 = client.prepareSearch().setQuery(qb1).setSize(1);
        QueryBuilder qb2 = QueryBuilders.matchQuery("user", "kimchy");
        SearchRequestBuilder requestBuilder2 = client.prepareSearch().setQuery(qb2).setSize(1);
        MultiSearchResponse multiResponse = client.prepareMultiSearch()
                .add(requestBuilder1).add(requestBuilder2)
                .execute().actionGet();
        long nbHits = 0;
        for (MultiSearchResponse.Item item : multiResponse.getResponses()) {
            SearchResponse response = item.getResponse();
            nbHits = response.getHits().getTotalHits();
            SearchHit[] hits = response.getHits().getHits();
            System.out.println(nbHits);
        }
    }

    /**
     * Test aggregation query
     */
    @Test
    public void testAggregation() {
        SearchResponse response = client.prepareSearch()
                .setQuery(QueryBuilders.matchAllQuery()) // first filter with the query
                .addAggregation(AggregationBuilders.terms("term").field("user"))
                .addAggregation(AggregationBuilders.dateHistogram("agg2").field("birth")
                        .interval(DateHistogramInterval.YEAR))
                .execute().actionGet();
        Aggregation aggregation2 = response.getAggregations().get("term");
        Aggregation aggregation = response.getAggregations().get("agg2");
        // SearchResponse response2 = client.search(new SearchRequest().searchType(SearchType.QUERY_AND_FETCH)).actionGet();
    }

    /**
     * Test terminateAfter
     */
    @Test
    public void testTerminateAfter() {
        SearchResponse response = client.prepareSearch("twitter").setTerminateAfter(1000).get();
        if (response.isTerminatedEarly()) {
            System.out.println("ternimate");
        }
    }

    /**
     * Filter query: greater than gt, less than lt, less than or equal lte, greater than or equal gte
     */
    @Test
    public void testFilter() {
        SearchResponse response = client.prepareSearch("twitter")
                .setTypes("tweet")
                .setQuery(QueryBuilders.matchAllQuery()) // query everything
                .setSearchType(SearchType.QUERY_THEN_FETCH)
                // .setPostFilter(FilterBuilders.rangeFilter("age").from(0).to(19)
                //         .includeLower(true).includeUpper(true))
                .setPostFilter(QueryBuilders.rangeQuery("age").gte(18).lte(22))
                .setExplain(true) // attach an explanation of how each hit's score was computed
                .get();
    }

    /**
     * Group query
     */
    @Test
    public void testGroupBy() {
        client.prepareSearch("twitter").setTypes("tweet")
                .setQuery(QueryBuilders.matchAllQuery())
                .setSearchType(SearchType.QUERY_THEN_FETCH)
                .addAggregation(AggregationBuilders.terms("user")
                        .field("user") // group by user
                        .size(0)) // size(0) returns all buckets
                .get();
    }
}
```