Configuration files
In the configuration files below, m103 stands for the HDFS service address; replace it with your own.
When I first tried to access files on HDFS from a Java client, the configuration file Hadoop-0.20.2/conf/core-site.xml cost me a lot of trouble: I could not connect to HDFS at all, and could neither write nor read files.
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <!-- global properties -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/zhangzk/hadoop</value>
    <description>A base for other temporary directories.</description>
  </property>
  <!-- file system properties -->
  <property>
    <name>fs.default.name</name>
    <value>hdfs://linux-zzk-113:9000</value>
  </property>
</configuration>
Configuration item hadoop.tmp.dir: on the NameNode, this is the directory where metadata is stored; on a DataNode, it is the directory where file data is stored on that node.
Configuration item fs.default.name: the NameNode's IP address (or hostname) and port. The default value is file:///. A Java API client must use the URL configured here to connect to HDFS, and DataNodes also use this URL to reach the NameNode.
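As a quick sanity check that the client really uses the fs.default.name address, the minimal sketch below opens a FileSystem against the URL configured above (hdfs://linux-zzk-113:9000). The class name HdfsConnectCheck is only an illustrative name, not part of the original code.

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

// Minimal connectivity check: the URI must match fs.default.name,
// otherwise the client falls back to the local file:/// filesystem.
public class HdfsConnectCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create("hdfs://linux-zzk-113:9000"), conf);
        System.out.println("Connected to: " + fs.getUri());
        fs.close();
    }
}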
hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Autogenerated by Cloudera Manager -->
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///mnt/sdc1/dfs/nn</value>
  </property>
  <property>
    <name>dfs.namenode.servicerpc-address</name>
    <value>m103:8022</value>
  </property>
  <property>
    <name>dfs.https.address</name>
    <value>m103:50470</value>
  </property>
  <property>
    <name>dfs.https.port</name>
    <value>50470</value>
  </property>
  <property>
    <name>dfs.namenode.http-address</name>
    <value>m103:50070</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.blocksize</name>
    <value>134217728</value>
  </property>
  <property>
    <name>dfs.client.use.datanode.hostname</name>
    <value>false</value>
  </property>
  <property>
    <name>fs.permissions.umask-mode</name>
    <value>022</value>
  </property>
  <property>
    <name>dfs.namenode.acls.enabled</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.block.local-path-access.user</name>
    <value>cloudera-scm</value>
  </property>
  <property>
    <name>dfs.client.read.shortcircuit</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.domain.socket.path</name>
    <value>/var/run/hdfs-sockets/dn</value>
  </property>
  <property>
    <name>dfs.client.read.shortcircuit.skip.checksum</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.client.domain.socket.data.traffic</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.datanode.hdfs-blocks-metadata.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>fs.http.impl</name>
    <value>com.scistor.datavision.fs.HTTPFileSystem</value>
  </property>
</configuration>
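To confirm which of these values a client actually sees, a small sketch like the one below loads the file explicitly and prints a few settings. The path /etc/hadoop/conf/hdfs-site.xml is only an assumed location for the file shown above; if the file is already on the classpath, the addResource call is unnecessary.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

// Prints a few values from hdfs-site.xml after loading it as an extra resource.
public class PrintHdfsSettings {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        conf.addResource(new Path("/etc/hadoop/conf/hdfs-site.xml")); // assumed location
        System.out.println("dfs.namenode.servicerpc-address = " + conf.get("dfs.namenode.servicerpc-address"));
        System.out.println("dfs.replication = " + conf.getInt("dfs.replication", 3));
        System.out.println("dfs.blocksize = " + conf.get("dfs.blocksize"));
    }
}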
mapred-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Autogenerated by Cloudera Manager -->
<configuration>
  <property>
    <name>mapreduce.job.split.metainfo.maxsize</name>
    <value>10000000</value>
  </property>
  <property>
    <name>mapreduce.job.counters.max</name>
    <value>120</value>
  </property>
  <property>
    <name>mapreduce.output.fileoutputformat.compress</name>
    <value>true</value>
  </property>
  <property>
    <name>mapreduce.output.fileoutputformat.compress.type</name>
    <value>BLOCK</value>
  </property>
  <property>
    <name>mapreduce.output.fileoutputformat.compress.codec</name>
    <value>org.apache.hadoop.io.compress.SnappyCodec</value>
  </property>
  <property>
    <name>mapreduce.map.output.compress.codec</name>
    <value>org.apache.hadoop.io.compress.SnappyCodec</value>
  </property>
  <property>
    <name>mapreduce.map.output.compress</name>
    <value>true</value>
  </property>
  <property>
    <name>zlib.compress.level</name>
    <value>DEFAULT_COMPRESSION</value>
  </property>
  <property>
    <name>mapreduce.task.io.sort.factor</name>
    <value>64</value>
  </property>
  <property>
    <name>mapreduce.map.sort.spill.percent</name>
    <value>0.8</value>
  </property>
  <property>
    <name>mapreduce.reduce.shuffle.parallelcopies</name>
    <value>10</value>
  </property>
  <property>
    <name>mapreduce.task.timeout</name>
    <value>600000</value>
  </property>
  <property>
    <name>mapreduce.client.submit.file.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>mapreduce.job.reduces</name>
    <value></value>
  </property>
  <property>
    <name>mapreduce.task.io.sort.mb</name>
    <value></value>
  </property>
  <property>
    <name>mapreduce.map.speculative</name>
    <value>false</value>
  </property>
  <property>
    <name>mapreduce.reduce.speculative</name>
    <value>false</value>
  </property>
  <property>
    <name>mapreduce.job.reduce.slowstart.completedmaps</name>
    <value>0.8</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>m103:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>m103:19888</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.https.address</name>
    <value>m103:19890</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.admin.address</name>
    <value>m103:10033</value>
  </property>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.staging-dir</name>
    <value>/user</value>
  </property>
  <property>
    <name>mapreduce.am.max-attempts</name>
    <value>2</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.resource.mb</name>
    <value>2048</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.resource.cpu-vcores</name>
    <value>1</value>
  </property>
  <property>
    <name>mapreduce.job.ubertask.enable</name>
    <value>false</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.command-opts</name>
    <value>-Djava.net.preferIPv4Stack=true -Xmx1717986918</value>
  </property>
  <property>
    <name>mapreduce.map.java.opts</name>
    <value>-Djava.net.preferIPv4Stack=true -Xmx1717986918</value>
  </property>
  <property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Djava.net.preferIPv4Stack=true -Xmx2576980378</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.admin.user.env</name>
    <value>LD_LIBRARY_PATH=$HADOOP_COMMON_HOME/lib/native:$JAVA_LIBRARY_PATH</value>
  </property>
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>2048</value>
  </property>
  <property>
    <name>mapreduce.map.cpu.vcores</name>
    <value>1</value>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>3072</value>
  </property>
  <property>
    <name>mapreduce.reduce.cpu.vcores</name>
    <value>1</value>
  </property>
  <property>
    <name>mapreduce.application.classpath</name>
    <value>$HADOOP_MAPRED_HOME/*,$HADOOP_MAPRED_HOME/lib/*,$MR2_CLASSPATH,$CDH_HCAT_HOME/share/hcatalog/*,$CDH_HIVE_HOME/lib/*,/etc/conf,/opt/cloudera/parcels/CDH/lib/*</value>
  </property>
  <property>
    <name>mapreduce.admin.user.env</name>
    <value>LD_LIBRARY_PATH=$HADOOP_COMMON_HOME/lib/native:$JAVA_LIBRARY_PATH</value>
  </property>
  <property>
    <name>mapreduce.shuffle.max.connections</name>
    <value>80</value>
  </property>
</configuration>
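A job-submission client only needs a handful of these settings directly (mapreduce.framework.name, the job-history addresses, the memory sizes); the hedged sketch below shows how a driver could read them and apply a per-job override. JobConfigCheck is an illustrative class name and the override value is only an example, not taken from the file above.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

// Reads a few mapred-site.xml values and demonstrates a per-job override,
// which takes precedence over the values in the file.
public class JobConfigCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration(); // picks up mapred-site.xml from the classpath
        Job job = Job.getInstance(conf, "config-check");
        System.out.println("framework = " + job.getConfiguration().get("mapreduce.framework.name"));
        System.out.println("map memory (MB) = " + job.getConfiguration().getInt("mapreduce.map.memory.mb", 1024));
        job.getConfiguration().setBoolean("mapreduce.map.output.compress", false); // example per-job override
    }
}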
Accessing HDFS files and directories with the Java API
package com.demo.hdfs;

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.util.Progressable;

/**
 * @author zhangzk
 */
public class FileCopyToHdfs {

    public static void main(String[] args) throws Exception {
        try {
            // uploadToHdfs();
            // deleteFromHdfs();
            // getDirectoryFromHdfs();
            appendToHdfs();
            readFromHdfs();
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            System.out.println("SUCCESS");
        }
    }

    /** Upload a local file to HDFS. */
    private static void uploadToHdfs() throws FileNotFoundException, IOException {
        String localSrc = "d://qq.txt";
        String dst = "hdfs://192.168.0.113:9000/user/zhangzk/qq.txt";
        InputStream in = new BufferedInputStream(new FileInputStream(localSrc));
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(dst), conf);
        OutputStream out = fs.create(new Path(dst), new Progressable() {
            public void progress() {
                System.out.print(".");
            }
        });
        IOUtils.copyBytes(in, out, 4096, true);
    }

    /** Read a file from HDFS and copy it to the local disk. */
    private static void readFromHdfs() throws FileNotFoundException, IOException {
        String dst = "hdfs://192.168.0.113:9000/user/zhangzk/qq.txt";
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(dst), conf);
        FSDataInputStream hdfsInStream = fs.open(new Path(dst));
        OutputStream out = new FileOutputStream("d:/qq-hdfs.txt");
        byte[] ioBuffer = new byte[1024];
        int readLen = hdfsInStream.read(ioBuffer);
        while (-1 != readLen) {
            out.write(ioBuffer, 0, readLen);
            readLen = hdfsInStream.read(ioBuffer);
        }
        out.close();
        hdfsInStream.close();
        fs.close();
    }

    /**
     * Append content to the end of a file on HDFS.
     * Note: dfs.support.append must be set to true in hdfs-site.xml before appending.
     */
    private static void appendToHdfs() throws FileNotFoundException, IOException {
        String dst = "hdfs://192.168.0.113:9000/user/zhangzk/qq.txt";
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(dst), conf);
        FSDataOutputStream out = fs.append(new Path(dst));
        byte[] bytes = "zhangzk add by hdfs java api".getBytes();
        out.write(bytes, 0, bytes.length);
        out.close();
        fs.close();
    }

    /** Delete a file from HDFS. */
    private static void deleteFromHdfs() throws FileNotFoundException, IOException {
        String dst = "hdfs://192.168.0.113:9000/user/zhangzk/qq-bak.txt";
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(dst), conf);
        fs.deleteOnExit(new Path(dst));
        fs.close();
    }

    /** List the files and directories under an HDFS path. */
    private static void getDirectoryFromHdfs() throws FileNotFoundException, IOException {
        String dst = "hdfs://192.168.0.113:9000/user/zhangzk";
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(dst), conf);
        FileStatus[] fileList = fs.listStatus(new Path(dst));
        for (int i = 0; i < fileList.length; i++) {
            System.out.println("name: " + fileList[i].getPath().getName() + "\t\tsize: " + fileList[i].getLen());
        }
        fs.close();
    }
}

Note: the append operation is no longer supported starting with hadoop-0.21; for details on append, see the article on JavaEye.
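When the cluster's core-site.xml and hdfs-site.xml are placed on the client classpath, the hard-coded hdfs://192.168.0.113:9000 URIs above become unnecessary; the sketch below lists a directory that way. The /user/zhangzk path is reused from the example above, and ListWithClasspathConfig is just an illustrative class name.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Lists an HDFS directory using whatever cluster fs.default.name (fs.defaultFS) points to.
public class ListWithClasspathConfig {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration(); // loads core-site.xml / hdfs-site.xml from the classpath
        FileSystem fs = FileSystem.get(conf);     // no hard-coded NameNode address
        for (FileStatus status : fs.listStatus(new Path("/user/zhangzk"))) {
            System.out.println(status.getPath().getName() + "\t" + status.getLen());
        }
        fs.close();
    }
}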