I won't rehash how to use Maven here; there are plenty of tutorials online, and little has changed over the years. This article only covers how to set up a Hadoop development environment.
1. Create the project
$ mvn archetype:generate -DgroupId=my.hadoopstudy -DartifactId=hadoopstudy -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false
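The quickstart archetype generates a standard Maven layout; for the groupId and artifactId above it should look roughly like this:

hadoopstudy/
├── pom.xml
└── src/
    ├── main/java/my/hadoopstudy/App.java
    └── test/java/my/hadoopstudy/AppTest.java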
2. Add the Hadoop dependencies hadoop-common, hadoop-hdfs, and hadoop-client to the pom.xml file. The pom.xml then looks like this:
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>my.hadoopstudy</groupId>
  <artifactId>hadoopstudy</artifactId>
  <packaging>jar</packaging>
  <version>1.0-SNAPSHOT</version>
  <name>hadoopstudy</name>
  <url>http://maven.apache.org</url>

  <dependencies>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-common</artifactId>
      <version>2.5.1</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-hdfs</artifactId>
      <version>2.5.1</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <version>2.5.1</version>
    </dependency>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>
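One caveat, which is my addition rather than part of the original article: the quickstart archetype defaults to a very old Java source level, so you may want to pin the compiler level in the same pom.xml, for example:

  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <configuration>
          <!-- Java 7, a reasonable choice for Hadoop 2.5.x -->
          <source>1.7</source>
          <target>1.7</target>
        </configuration>
      </plugin>
    </plugins>
  </build>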
3. Testing
3.1 First we can test HDFS development. This assumes the Hadoop cluster set up in the previous Hadoop article. The class code is as follows:
package my.hadoopstudy.dfs;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

import java.io.InputStream;
import java.net.URI;

public class Test {
    public static void main(String[] args) throws Exception {
        String uri = "hdfs://9.111.254.189:9000/";
        Configuration config = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(uri), config);

        // List all files and directories under /user/fkong/ on HDFS
        FileStatus[] statuses = fs.listStatus(new Path("/user/fkong"));
        for (FileStatus status : statuses) {
            System.out.println(status);
        }

        // Create a file under /user/fkong on HDFS and write one line of text to it
        FSDataOutputStream os = fs.create(new Path("/user/fkong/test.log"));
        os.write("Hello World!".getBytes());
        os.flush();
        os.close();

        // Print the contents of the file just written under /user/fkong
        InputStream is = fs.open(new Path("/user/fkong/test.log"));
        IOUtils.copyBytes(is, System.out, 1024, true);
    }
}
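A small optional addition of mine, not in the original class: delete the test file and close the FileSystem handle at the end of main(), so the example can be re-run cleanly.

        // Optional cleanup (my assumption: you want to reset state between runs)
        fs.delete(new Path("/user/fkong/test.log"), false); // false = non-recursive
        fs.close();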
3.2 Test a MapReduce job

The test code is fairly simple:
package my.hadoopstudy.mapreduce;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

import java.io.IOException;

public class EventCount {

    // Emits (eventName, 1) for each line, where eventName is the text before the first space
    public static class MyMapper extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text event = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            int idx = value.toString().indexOf(" ");
            if (idx > 0) {
                String e = value.toString().substring(0, idx);
                event.set(e);
                context.write(event, one);
            }
        }
    }

    // Sums the counts for each event name
    public static class MyReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length < 2) {
            System.err.println("Usage: EventCount <in> <out>");
            System.exit(2);
        }
        Job job = Job.getInstance(conf, "event count");
        job.setJarByClass(EventCount.class);
        job.setMapperClass(MyMapper.class);
        job.setCombinerClass(MyReducer.class);
        job.setReducerClass(MyReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
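Since addition is associative and commutative, MyReducer can safely double as the combiner. A sketch of the data flow for one event type (the numbers assume the three sample log files created below):

// map:     "JOB_NEW ..."          -> ("JOB_NEW", 1)    one pair per matching line
// combine: ("JOB_NEW", [1, 1, 1]) -> ("JOB_NEW", 3)    partial sum on each map task
// reduce:  ("JOB_NEW", [3, 3, 3]) -> ("JOB_NEW", 9)    final count across all files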
Run "mvn package" to build the jar hadoopstudy-1.0-SNAPSHOT.jar, then copy the jar file to the Hadoop installation directory.

Suppose we want to analyze the Event information in several log files, counting how many times each kind of Event occurs. Create the following directories and files:
/tmp/input/event.log.1
/tmp/input/event.log.2
/tmp/input/event.log.3
Since this is only an example, all three files can have the same content. Suppose each contains the lines below (a sketch of commands to create the files follows the listing):
JOB_NEW ...
JOB_NEW ...
JOB_FINISH ...
JOB_NEW ...
JOB_FINISH ...
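One way to create the three files locally (my sketch; the trailing "..." is just placeholder payload, exactly as in the listing above):

$ mkdir -p /tmp/input
$ printf 'JOB_NEW ...\nJOB_NEW ...\nJOB_FINISH ...\nJOB_NEW ...\nJOB_FINISH ...\n' > /tmp/input/event.log.1
$ cp /tmp/input/event.log.1 /tmp/input/event.log.2
$ cp /tmp/input/event.log.1 /tmp/input/event.log.3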
Then copy these files to HDFS:
$ bin/hdfs dfs -put /tmp/input /user/fkong/input
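Optionally, verify the upload (my addition):

$ bin/hdfs dfs -ls /user/fkong/input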
Run the MapReduce job:
$ bin/hadoop jar hadoopstudy-1.0-SNAPSHOT.jar my.hadoopstudy.mapreduce.EventCount /user/fkong/input /user/fkong/output
Check the results:
$ bin/hdfs dfs -cat /user/fkong/output/part-r-00000
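With the three identical sample files above (three JOB_NEW lines and two JOB_FINISH lines each), the output should contain tab-separated counts along these lines:

JOB_FINISH	6
JOB_NEW	9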
That is all for this article; I hope it is helpful for your studies.