
Reposted from: http://blog.csdn.net/wang382758656/article/details/5771332

1. Copy a file from the local file system to HDFS
The srcFile variable holds the full name (path + file name) of the file on the local file system, and the dstFile variable holds the desired full name of the file in the Hadoop file system.

Configuration config = new Configuration();
FileSystem hdfs = FileSystem.get(config);
Path srcPath = new Path(srcFile);
Path dstPath = new Path(dstFile);
hdfs.copyFromLocalFile(srcPath, dstPath);



2. Create an HDFS file
The fileName variable contains the file name and path in the Hadoop file system.
The content of the file is the buff variable, an array of bytes.

// byte[] buff - the content of the file

Configuration config = new Configuration();
FileSystem hdfs = FileSystem.get(config);
Path path = new Path(fileName);
FSDataOutputStream outputStream = hdfs.create(path);
outputStream.write(buff, 0, buff.length);
outputStream.close();


3. Rename an HDFS file
To rename a file in the Hadoop file system, we need the full name (path + name) of
the file we want to rename. The rename method returns true if the file was renamed, otherwise false.

Configuration config = new Configuration();
FileSystem hdfs = FileSystem.get(config);
Path fromPath = new Path(fromFileName);
Path toPath = new Path(toFileName);
boolean isRenamed = hdfs.rename(fromPath, toPath);



4. Delete an HDFS file
To delete a file in the Hadoop file system, we need the full name (path + name)
of the file we want to delete. The delete method returns true if the file was deleted, otherwise false.

Configuration config = new Configuration();
FileSystem hdfs = FileSystem.get(config);
Path path = new Path(fileName);
boolean isDeleted = hdfs.delete(path, false);

Recursive delete:

Configuration config = new Configuration();
FileSystem hdfs = FileSystem.get(config);
Path path = new Path(fileName);
boolean isDeleted = hdfs.delete(path, true);


 
  
5. Get an HDFS file's last modification time
To get the last modification time of a file in the Hadoop file system,
we need the full name (path + name) of the file.

Configuration config = new Configuration();
FileSystem hdfs = FileSystem.get(config);
Path path = new Path(fileName);
FileStatus fileStatus = hdfs.getFileStatus(path);
long modificationTime = fileStatus.getModificationTime();


  
6. Check if a file exists in HDFS
To check the existence of a file in the Hadoop file system,
we need the full name (path + name) of the file we want to check.
The exists method returns true if the file exists, otherwise false.

Configuration config = new Configuration();
FileSystem hdfs = FileSystem.get(config);
Path path = new Path(fileName);
boolean isExists = hdfs.exists(path);


  
7. Get the locations of a file in the HDFS cluster
A file can exist on more than one node in the Hadoop file system cluster for two reasons:
Based on the HDFS cluster configuration, Hadoop stores parts (blocks) of a file on different nodes in the cluster.
Based on the HDFS cluster configuration, Hadoop keeps more than one copy of each block on different nodes for redundancy (the default is three).
 

Configuration config = new Configuration();
FileSystem hdfs = FileSystem.get(config);
Path path = new Path(fileName);
FileStatus fileStatus = hdfs.getFileStatus(path);
// Note: getFileBlockLocations takes the FileStatus, not the Path
// (the original article mistakenly passed path here)
BlockLocation[] blkLocations = hdfs.getFileBlockLocations(fileStatus, 0, fileStatus.getLen());
int blkCount = blkLocations.length;
for (int i = 0; i < blkCount; i++) {
    String[] hosts = blkLocations[i].getHosts();
    // Do something with the block hosts
}
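The replication factor behind the second reason above is a cluster-wide setting. As a sketch only, the number of copies is controlled by the standard dfs.replication property in hdfs-site.xml (the value 3 matches the default mentioned above):

```xml
<!-- hdfs-site.xml: how many copies HDFS keeps of each block -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>
```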


8. Get a list of all the node host names in the HDFS cluster

This method casts the FileSystem object to a DistributedFileSystem object.
It works only when Hadoop is configured as a cluster; running Hadoop on the
local machine in a non-cluster configuration will cause it to throw an exception.

Configuration config = new Configuration();
FileSystem fs = FileSystem.get(config);
DistributedFileSystem hdfs = (DistributedFileSystem) fs;
DatanodeInfo[] dataNodeStats = hdfs.getDataNodeStats();
String[] names = new String[dataNodeStats.length];
for (int i = 0; i < dataNodeStats.length; i++) {
    names[i] = dataNodeStats[i].getHostName();
}


  
  
Complete example program

/*
 * Demonstrates the Java interface for operating on HDFS
 */



import org.apache.hadoop.conf.*;
import org.apache.hadoop.fs.*;
import org.apache.hadoop.hdfs.*;
import org.apache.hadoop.hdfs.protocol.*;
import java.util.Date;

public class DFSOperater {

    /**
     * @param args
     */

    public static void main(String[] args) {

        Configuration conf = new Configuration();
        
        try {
            // Get a list of all the nodes host names in the HDFS cluster

            FileSystem fs = FileSystem.get(conf);
            DistributedFileSystem hdfs = (DistributedFileSystem)fs;
            DatanodeInfo[] dataNodeStats = hdfs.getDataNodeStats();
            String[] names = new String[dataNodeStats.length];
            System.out.println("list of all the nodes in HDFS cluster:"); //print info

            for(int i=0; i < dataNodeStats.length; i++){
                names[i] = dataNodeStats[i].getHostName();
                System.out.println(names[i]); //print info

            }
            Path f = new Path("/user/cluster/dfs.txt");
            
            //check if a file exists in HDFS

            boolean isExists = fs.exists(f);
            System.out.println("The file exists? [" + isExists + "]");
            
            //if the file exist, delete it

            if (isExists) {
                 boolean isDeleted = hdfs.delete(f, false); // false: not recursive
                 if (isDeleted) System.out.println("now delete " + f.getName());
            }
            
            //create and write

            System.out.println("create and write [" + f.getName() + "] to hdfs:");
            FSDataOutputStream os = fs.create(f, true);
            for (int i = 0; i < 10; i++) {
                os.writeBytes("test hdfs ");
            }
            os.writeBytes("\n");
            os.close();
            
            //get the locations of a file in HDFS

            System.out.println("locations of file in HDFS:");
            FileStatus filestatus = fs.getFileStatus(f);
            BlockLocation[] blkLocations = fs.getFileBlockLocations(filestatus, 0,filestatus.getLen());
            int blkCount = blkLocations.length;
            for(int i=0; i < blkCount; i++){
                String[] hosts = blkLocations[i].getHosts();
                //Do sth with the block hosts

                System.out.println(java.util.Arrays.toString(hosts));
            }
            
            //get HDFS file last modification time

            long modificationTime = filestatus.getModificationTime(); // milliseconds since the epoch
            Date d = new Date(modificationTime);
            System.out.println(d);

            //reading from HDFS

            System.out.println("read [" + f.getName() + "] from hdfs:");
            FSDataInputStream dis = fs.open(f);
            // readUTF() would fail here because the file was not written with
            // writeUTF(); read the raw bytes instead
            byte[] content = new byte[(int) fs.getFileStatus(f).getLen()];
            dis.readFully(content);
            System.out.println(new String(content));
            dis.close();

        } catch (Exception e) {
            // TODO: handle exception

            e.printStackTrace();
        }
                
    }

}


posted on 2011-07-28 12:03 by 哈哈的日子

FeedBack:
# re: HDFS的JAVA接口API操作实例(转) 2011-07-29 09:05 tongxing
Hadoop is deployed on Linux, so right now every program has to be packaged as a jar and run inside the cluster. The problem I am considering: we have a website whose uploads will reach 50 GB, so could the website talk to Hadoop's DFS directly and upload into it? That way the data would not have to sit on a single machine. I am just not sure whether this can be done.
(It seems my cnblogs account is different from this one. Annoying!)

# re: HDFS的JAVA接口API操作实例(转) 2011-07-29 09:52 哈哈的日子
@tongxing
The program can connect to the cluster remotely; you should not need to move it onto the cluster.

Getting the data onto HDFS is fairly easy, but performance needs some thought.
What is certain is that performance with small files is poor.
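To expand on the reply above: a client connects to a remote cluster by pointing fs.default.name at the cluster's NameNode, either in code via Configuration.set(...) or in the client's core-site.xml. A minimal core-site.xml sketch, where the host namenode and port 9000 are placeholder assumptions for an actual cluster address:

```xml
<configuration>
  <property>
    <!-- URI of the remote NameNode; host and port are placeholders -->
    <name>fs.default.name</name>
    <value>hdfs://namenode:9000</value>
  </property>
</configuration>
```

With this in place, the copyFromLocalFile example from section 1 uploads straight to the remote cluster without the program running on it.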
  
