Configuration files
Wherever m103 appears in the listings below, replace it with the address of your own HDFS service.
To access files on HDFS with the Java client, I have to admit that the configuration file hadoop-0.20.2/conf/core-site.xml cost me a lot of grief at first: with the wrong settings here the client could not connect to HDFS, so files could not be created or read.
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <!-- global properties -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/zhangzk/hadoop</value>
    <description>A base for other temporary directories.</description>
  </property>
  <!-- file system properties -->
  <property>
    <name>fs.default.name</name>
    <value>hdfs://linux-zzk-113:9000</value>
  </property>
</configuration>
Configuration item hadoop.tmp.dir: on the namenode this is the directory where the metadata is stored; on a datanode it is the directory where the file block data is stored.
Configuration item fs.default.name: the namenode's IP address (or hostname) and port number. The default value is file:///. A Java API client must use the URL configured here to connect to HDFS, and the datanodes also reach the namenode through this URL.
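For example, a minimal connection check might look like the sketch below, assuming the hdfs://linux-zzk-113:9000 address from the core-site.xml above (the class name HdfsConnectCheck is just for illustration):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsConnectCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // The URI must match fs.default.name on the cluster; with the default
        // (file:///) the client would silently use the local file system instead.
        FileSystem fs = FileSystem.get(URI.create("hdfs://linux-zzk-113:9000"), conf);
        System.out.println("connected, home directory: " + fs.getHomeDirectory());
        System.out.println("root exists: " + fs.exists(new Path("/")));
        fs.close();
    }
}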
hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<!--Autogenerated by Cloudera Manager-->
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/mnt/sdc1/dfs/nn</value>
  </property>
  <property>
    <name>dfs.namenode.servicerpc-address</name>
    <value>m103:8022</value>
  </property>
  <property>
    <name>dfs.namenode.http-address</name>
    <value>m103:50070</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.blocksize</name>
    <value>134217728</value>
  </property>
  <property>
    <name>dfs.client.use.datanode.hostname</name>
    <value>false</value>
  </property>
  <property>
    <name>fs.permissions.umask-mode</name>
    <value>022</value>
  </property>
  <property>
    <name>dfs.namenode.acls.enabled</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.block.local-path-access.user</name>
    <value>cloudera-scm</value>
  </property>
  <property>
    <name>dfs.client.read.shortcircuit</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.domain.socket.path</name>
    <value>/var/run/hdfs-sockets/dn</value>
  </property>
  <property>
    <name>dfs.client.read.shortcircuit.skip.checksum</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.datanode.hdfs-blocks-metadata.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>fs.http.impl</name>
    <value>scistor.datavision.fs.HTTPFileSystem</value>
  </property>
</configuration>
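To check which of these values a Java client actually sees, a small sketch like the following can print the effective settings loaded from the core-site.xml/hdfs-site.xml on the classpath (the class name PrintHdfsConf is made up for illustration):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class PrintHdfsConf {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // FileSystem.get(conf) connects to whatever fs.defaultFS / fs.default.name
        // points at; creating an HDFS FileSystem also pulls the hdfs-site.xml
        // defaults into the configuration.
        FileSystem fs = FileSystem.get(conf);
        System.out.println("fs.defaultFS              = " + conf.get("fs.defaultFS", conf.get("fs.default.name")));
        System.out.println("dfs.replication           = " + conf.get("dfs.replication"));
        System.out.println("dfs.blocksize             = " + conf.get("dfs.blocksize"));
        System.out.println("dfs.namenode.http-address = " + conf.get("dfs.namenode.http-address"));
        fs.close();
    }
}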
mapred-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<!--Autogenerated by Cloudera Manager-->
<configuration>
  <property>
    <name>mapreduce.job.split.metainfo.maxsize</name>
    <value>100000000</value>
  </property>
  <property>
    <name>mapreduce.output.fileoutputformat.compress</name>
    <value>true</value>
  </property>
  <property>
    <name>mapreduce.output.fileoutputformat.compress.type</name>
    <value>BLOCK</value>
  </property>
  <property>
    <name>mapreduce.output.fileoutputformat.compress.codec</name>
    <value>org.apache.hadoop.io.compress.SnappyCodec</value>
  </property>
  <property>
    <name>mapreduce.map.output.compress</name>
    <value>true</value>
  </property>
  <property>
    <name>zlib.compress.level</name>
    <value>DEFAULT_COMPRESSION</value>
  </property>
  <property>
    <name>mapreduce.map.sort.spill.percent</name>
    <value>0.8</value>
  </property>
  <property>
    <name>mapreduce.client.submit.file.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>mapreduce.job.reduces</name>
    <value>24</value>
  </property>
  <property>
    <name>mapreduce.map.speculative</name>
    <value>false</value>
  </property>
  <property>
    <name>mapreduce.reduce.speculative</name>
    <value>false</value>
  </property>
  <property>
    <name>mapreduce.job.reduce.slowstart.completedmaps</name>
    <value>0.8</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>m103:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>m103:19888</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.https.address</name>
    <value>m103:19890</value>
  </property>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.staging-dir</name>
    <value>/user</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.max-attempts</name>
    <value>2</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.resource.mb</name>
    <value>2048</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.command-opts</name>
    <value>-Djava.net.preferIPv4Stack=true -Xmx1717986918</value>
  </property>
  <property>
    <name>mapreduce.map.java.opts</name>
    <value>-Djava.net.preferIPv4Stack=true -Xmx1717986918</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.admin.user.env</name>
    <value>LD_LIBRARY_PATH=$HADOOP_COMMON_HOME/lib/native:$JAVA_LIBRARY_PATH</value>
  </property>
  <property>
    <name>mapreduce.map.cpu.vcores</name>
    <value>1</value>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>3072</value>
  </property>
  <property>
    <name>mapreduce.reduce.cpu.vcores</name>
    <value>1</value>
  </property>
  <property>
    <name>mapreduce.application.classpath</name>
    <value>$HADOOP_MAPRED_HOME/*,$HADOOP_MAPRED_HOME/lib/*,$MR2_CLASSPATH,$CDH_HCAT_HOME/share/hcatalog/*,$CDH_HIVE_HOME/lib/*,/etc/hive/conf,/opt/cloudera/parcels/CDH/lib/udps/*</value>
  </property>
  <property>
    <name>mapreduce.admin.user.env</name>
    <value>LD_LIBRARY_PATH=$HADOOP_COMMON_HOME/lib/native:$JAVA_LIBRARY_PATH</value>
  </property>
</configuration>
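The compression-related entries above can also be applied per job from code. A rough sketch using the Hadoop 2.x mapreduce API (the job name and class name are made up for illustration):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.compress.SnappyCodec;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;

public class CompressedOutputJob {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();               // site-wide mapred-site.xml defaults apply here
        Job job = Job.getInstance(conf, "compressed-output");   // job name is arbitrary

        // Per-job equivalents of the XML entries above:
        FileOutputFormat.setCompressOutput(job, true);                               // mapreduce.output.fileoutputformat.compress
        FileOutputFormat.setOutputCompressorClass(job, SnappyCodec.class);           // ...compress.codec
        SequenceFileOutputFormat.setOutputCompressionType(job,
                SequenceFile.CompressionType.BLOCK);                                 // ...compress.type
        job.getConfiguration().setBoolean("mapreduce.map.output.compress", true);    // compress intermediate map output

        // ... set mapper, reducer, input/output paths as usual, then:
        // System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}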
Using the Java API to access HDFS files and directories
package com.demo.hdfs;

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.util.Progressable;

/**
 * @author zhangzk
 */
public class FileCopyToHdfs {

    public static void main(String[] args) throws Exception {
        try {
            // uploadToHdfs();
            // deleteFromHdfs();
            // getDirectoryFromHdfs();
            appendToHdfs();
            readFromHdfs();
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            System.out.println("SUCCESS");
        }
    }

    /** Upload a local file to HDFS. */
    private static void uploadToHdfs() throws FileNotFoundException, IOException {
        String localSrc = "d://qq.txt";
        String dst = "hdfs://192.168.0.113:9000/user/zhangzk/qq.txt";
        InputStream in = new BufferedInputStream(new FileInputStream(localSrc));
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(dst), conf);
        OutputStream out = fs.create(new Path(dst), new Progressable() {
            public void progress() {
                System.out.print("."); // print a dot for each progress callback
            }
        });
        IOUtils.copyBytes(in, out, 4096, true);
    }

    /** Read a file from HDFS and copy it to the local disk. */
    private static void readFromHdfs() throws FileNotFoundException, IOException {
        String dst = "hdfs://192.168.0.113:9000/user/zhangzk/qq.txt";
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(dst), conf);
        FSDataInputStream hdfsInStream = fs.open(new Path(dst));

        OutputStream out = new FileOutputStream("d:/qq-hdfs.txt");
        byte[] ioBuffer = new byte[1024];
        int readLen = hdfsInStream.read(ioBuffer);
        while (-1 != readLen) {
            out.write(ioBuffer, 0, readLen);
            readLen = hdfsInStream.read(ioBuffer);
        }
        out.close();
        hdfsInStream.close();
        fs.close();
    }

    /**
     * Append content to the end of a file on HDFS.
     * NOTE: to append to (update) a file, hdfs-site.xml must contain
     * <property><name>dfs.support.append</name><value>true</value></property>
     */
    private static void appendToHdfs() throws FileNotFoundException, IOException {
        String dst = "hdfs://192.168.0.113:9000/user/zhangzk/qq.txt";
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(dst), conf);
        FSDataOutputStream out = fs.append(new Path(dst));

        byte[] bytes = "zhangzk add by hdfs java api".getBytes();
        out.write(bytes, 0, bytes.length); // write the payload once
        out.close();
        fs.close();
    }

    /** Delete a file from HDFS. */
    private static void deleteFromHdfs() throws FileNotFoundException, IOException {
        String dst = "hdfs://192.168.0.113:9000/user/zhangzk/qq-bak.txt";
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(dst), conf);
        fs.deleteOnExit(new Path(dst)); // the path is removed when the FileSystem is closed
        fs.close();
    }

    /** List the files and directories under a path on HDFS. */
    private static void getDirectoryFromHdfs() throws FileNotFoundException, IOException {
        String dst = "hdfs://192.168.0.113:9000/user/zhangzk";
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(dst), conf);
        FileStatus[] fileList = fs.listStatus(new Path(dst));
        for (int i = 0; i < fileList.length; i++) {
            System.out.println("name: " + fileList[i].getPath().getName()
                    + "\t\tsize: " + fileList[i].getLen());
        }
        fs.close();
    }
}

Note: the append operation is not supported in Hadoop releases before 0.21. For details on append, refer to the related document on JavaEye.
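On a cluster where append is available, appendToHdfs() can be written a little more defensively, roughly like the sketch below: the payload is written exactly once, and the IOException thrown by clusters without append support is handled. The class name SafeAppendExample is illustrative, and the dfs.support.append toggle matters mainly on old 0.20.x-era clusters (it must also be enabled on the server side there):

import java.io.IOException;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SafeAppendExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.setBoolean("dfs.support.append", true);   // needed on old 0.20.x clusters; newer releases support append by default
        FileSystem fs = FileSystem.get(URI.create("hdfs://192.168.0.113:9000"), conf);
        Path target = new Path("/user/zhangzk/qq.txt");
        byte[] payload = "zhangzk add by hdfs java api\n".getBytes("UTF-8");
        try {
            FSDataOutputStream out = fs.append(target);
            out.write(payload);                        // write the whole payload once, no loop needed
            out.close();
        } catch (IOException e) {
            // Clusters that do not support append throw an IOException here.
            System.err.println("append failed: " + e.getMessage());
        } finally {
            fs.close();
        }
    }
}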